
Error when adding more layers in TensorFlow

I want to add more layers to my neural network in TensorFlow, but I get the following error when I do:

ValueError: Dimensions must be equal, but are 256 and 784 for 'MatMul_1' (op: 'MatMul') with input shapes: [?,256], [784,256]. 

This is how I create the weights and biases:

# Store layers weight & bias 
weights = { 
    'hidden_layer': tf.Variable(tf.random_normal([n_input, n_hidden_layer])), 
    'out': tf.Variable(tf.random_normal([n_hidden_layer, n_classes])) 
} 
biases = { 
    'hidden_layer': tf.Variable(tf.random_normal([n_hidden_layer])), 
    'out': tf.Variable(tf.random_normal([n_classes])) 
} 

And this is how I build the model:

# Hidden layer with RELU activation 
layer_1 = tf.add(tf.matmul(x_flat, weights['hidden_layer']), biases['hidden_layer']) 
layer_1 = tf.nn.relu(layer_1) 
layer_1 = tf.nn.dropout(layer_1, keep_prob) 

layer_2 = tf.add(tf.matmul(layer_1, weights['hidden_layer']), biases['hidden_layer']) 
layer_2 = tf.nn.relu(layer_2) 
layer_2 = tf.nn.dropout(layer_2, keep_prob) 
# Output layer with linear activation 
logits = tf.matmul(layer_2, weights['out']) + biases['out'] 

The error is most likely in layer_2. I am using the MNIST dataset. These are the shapes of x, y, and x_flat (x reshaped to two dimensions):

x shape is (?, 28, 28, 1) 
y shape is (?, 10) 
x flat shape is (?, 784) 
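
For context, a minimal sketch of how these tensors could be set up for MNIST; the placeholder definitions and the n_* values below are assumptions, since they are not shown in the question:

import tensorflow as tf

n_input = 784          # 28 * 28 flattened MNIST image
n_hidden_layer = 256   # hidden layer width
n_classes = 10         # MNIST digits 0-9

# Assumed placeholders matching the shapes above
x = tf.placeholder(tf.float32, [None, 28, 28, 1])   # (?, 28, 28, 1)
y = tf.placeholder(tf.float32, [None, n_classes])   # (?, 10)
keep_prob = tf.placeholder(tf.float32)

# Flatten the images for the fully connected layers
x_flat = tf.reshape(x, [-1, n_input])                # (?, 784)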

Answer


You should probably use different weights and biases for layer 1 and layer 2.

The problem is that the weights for both layer 1 and layer 2 are built for an input of size 784, but the output of layer 1 has size 256, so layer 2 cannot consume it.

Specifically, the matrices layer_1 and weights['hidden_layer'] that you try to multiply in this line have incompatible dimensions:

layer_2 = tf.add(tf.matmul(layer_1, weights['hidden_layer']), biases['hidden_layer']) 
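
To make the mismatch concrete, here is a tiny standalone sketch that reproduces the same error; the zero tensors only stand in for the real shapes:

import tensorflow as tf

a = tf.zeros([1, 256])    # stands in for layer_1, shape (?, 256)
b = tf.zeros([784, 256])  # stands in for weights['hidden_layer'], shape (784, 256)

# tf.matmul requires the inner dimensions to match (a.shape[1] == b.shape[0]),
# so this raises: ValueError: Dimensions must be equal, but are 256 and 784 ...
bad = tf.matmul(a, b)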

Something like this should work instead:

# Store layers weight & bias 
weights = { 
    'layer_1': tf.Variable(tf.random_normal([n_input, n_hidden_layer])), 
    'layer_2': tf.Variable(tf.random_normal([n_hidden_layer, n_hidden_layer])), 
    'out': tf.Variable(tf.random_normal([n_hidden_layer, n_classes])) 
} 
biases = { 
    'layer_1': tf.Variable(tf.random_normal([n_hidden_layer])), 
    'layer_2': tf.Variable(tf.random_normal([n_hidden_layer])), 
    'out': tf.Variable(tf.random_normal([n_classes])) 
} 

# Hidden layer with RELU activation 
layer_1 = tf.add(tf.matmul(x_flat, weights['layer_1']), biases['layer_1']) 
layer_1 = tf.nn.relu(layer_1) 
layer_1 = tf.nn.dropout(layer_1, keep_prob) 

layer_2 = tf.add(tf.matmul(layer_1, weights['layer_2']), biases['layer_2']) 
layer_2 = tf.nn.relu(layer_2) 
layer_2 = tf.nn.dropout(layer_2, keep_prob) 
# Output layer with linear activation 
logits = tf.matmul(layer_2, weights['out']) + biases['out'] 
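
With separate variables, weights['layer_2'] has shape [n_hidden_layer, n_hidden_layer] (256 x 256 here), which matches the (?, 256) output of layer_1, so the second matmul now has compatible dimensions.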

Of course, it was such a simple thing; I completely forgot that I was reusing the same weights and biases. Thank you so much, I had been struggling with this for an hour and completely overlooked it. Thanks. –