
Recently I have been working with TensorFlow, and I am exploring how to implement a multi-layer perceptron (MLP) in it. How do I create an MLP with a variable number of layers?

I have gone through many tutorials online, and most of them use only one or two hidden layers. A simple example, taken from here:

def forwardprop(X, w_1, w_2): 
    """ 
    Forward-propagation. 
    IMPORTANT: yhat is not softmax since TensorFlow's 
    softmax_cross_entropy_with_logits() does that internally. 
    """ 
    h = tf.nn.sigmoid(tf.matmul(X, w_1)) # The \sigma function 
    yhat = tf.matmul(h, w_2) # The \varphi function 
    return yhat 

X = tf.placeholder("float", shape=[None, x_size]) 
y = tf.placeholder("float", shape=[None, y_size]) 

# Weight initializations 
w_1 = init_weights((x_size, h_size)) 
w_2 = init_weights((h_size, y_size)) 

# Forward propagation 
out = forwardprop(X, w_1, w_2) 
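
The init_weights helper is not shown in the snippet above; a common definition, assuming the weights are simply drawn from a small random normal (a sketch, not necessarily the exact helper from the linked tutorial), would be:

def init_weights(shape):
    """Initialize a weight matrix with small random normal values."""
    weights = tf.random_normal(shape, stddev=0.1)
    return tf.Variable(weights)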

In this code there is one hidden layer. Now I would like to build a fully connected neural network with a variable number of layers.

Suppose a list h_archi = [100, 150, 100, 50], where each value is the number of neurons in the i-th hidden layer (so here the total number of layers is 4). For this variable number of layers I wrote the following ugly code,

emb_vec = tf.Variable(tf.random_normal([vocabulary_size, EMBEDDING_DIM]), name="emb_vec")

tot_layer = len(h_archi)
op = np.zeros(tot_layer+1)
hid_rep = np.zeros(tot_layer+1)
bias = np.zeros(tot_layer+1)

op[0] = tf.matmul(x, emb_vec)

for idx, tot_neu in enumerate(h_archi):
    assert(tot_neu > 0)
    layer_no = idx+1
    r, c = op[layer_no-1].get_shape()
    hid_rep[layer_no] = tf.Variable(tf.random_normal([c, tot_neu]), name="hid_{0}_{1}".format(layer_no-1, layer_no))
    bias[layer_no] = tf.Variable(tf.random_normal([tot_neu]), name="bias_{0}".format(layer_no))
    op[layer_no] = tf.add(tf.matmul(op[layer_no-1], hid_rep[layer_no]), bias[layer_no])

r, c = op[tot_layer].get_shape()
last_layer = tf.Variable(tf.random_normal([c, output_size]), name="hid_{0}_{1}".format(tot_layer, "last_layer"))
bias_last = tf.Variable(tf.random_normal([output_size]), name="bias_last")
output = tf.add(tf.matmul(op[tot_layer], last_layer))
prediction = tf.nn.softmax(output)

This code is completely wrong, because you cannot assign TensorFlow tensors into a NumPy array like this. So what is the correct way to design something like this?

Answer


Instead of your loop, you can do something like this:

# last_layer always holds the output tensor of the most recent layer
last_layer = x
for idx, tot_neu in enumerate(h_archi):
    assert(tot_neu > 0)
    layer_no = idx+1
    r, c = last_layer.get_shape()
    weights_ = tf.Variable(tf.random_normal([c, tot_neu]), name="hid_{0}_{1}".format(layer_no-1, layer_no))
    bias_ = tf.Variable(tf.random_normal([tot_neu]), name="bias_{0}".format(layer_no))
    last_layer = tf.add(tf.matmul(last_layer, weights_), bias_)
r, c = last_layer.get_shape()
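
From here the output layer and softmax from the question can be attached to last_layer in the same way (a sketch; output_size is assumed to be defined as in the question, and the bias is passed to tf.add, which the question's final tf.add call was missing):

# c is the width of the final hidden layer, taken from the loop above
last_weights = tf.Variable(tf.random_normal([c, output_size]), name="hid_{0}_last".format(len(h_archi)))
bias_last = tf.Variable(tf.random_normal([output_size]), name="bias_last")
output = tf.add(tf.matmul(last_layer, last_weights), bias_last)
prediction = tf.nn.softmax(output)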

If you need access to the intermediate tensors (biases, weights, layers, etc.), you can simply store them in a list at each step, for example:
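
A minimal sketch of that idea, reusing the loop above (the names weights_list, bias_list and layer_list are just for illustration):

weights_list, bias_list, layer_list = [], [], []
last_layer = x
for idx, tot_neu in enumerate(h_archi):
    layer_no = idx+1
    r, c = last_layer.get_shape()
    w = tf.Variable(tf.random_normal([c, tot_neu]), name="hid_{0}_{1}".format(layer_no-1, layer_no))
    b = tf.Variable(tf.random_normal([tot_neu]), name="bias_{0}".format(layer_no))
    last_layer = tf.add(tf.matmul(last_layer, w), b)
    # keep every intermediate tensor so it can be inspected or reused later
    weights_list.append(w)
    bias_list.append(b)
    layer_list.append(last_layer)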


Thank you very much for your help.
