TensorFlow classification to regression: "Assign requires shapes of both tensors to match"

I want to convert the classification graph below into a regression graph, so that it returns a single value instead of three:
import tensorflow as tf

# Helper definitions (not shown in the original question; the standard
# TensorFlow MNIST-tutorial versions are assumed):
def weight_variable(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

baseFeatureSize = 5
keep_prob = tf.placeholder(tf.float32)
x = tf.placeholder(tf.float32, shape=[None, 64])
x_image = tf.reshape(x, [-1, 8, 8, 1])
W_conv1 = weight_variable([5, 5, 1, baseFeatureSize])
b_conv1 = bias_variable([baseFeatureSize])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
W_conv2 = weight_variable([8, 8, baseFeatureSize, baseFeatureSize * 2])
b_conv2 = bias_variable([baseFeatureSize * 2])
h_conv2 = tf.nn.relu(conv2d(h_conv1, W_conv2) + b_conv2)
W_fc1 = weight_variable([8 * 8 * baseFeatureSize * 2, baseFeatureSize * 4])
b_fc1 = bias_variable([baseFeatureSize * 4])
h_pool2_flat = tf.reshape(h_conv2, [-1, 8 * 8 * baseFeatureSize * 2])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
W_fc3 = weight_variable([baseFeatureSize * 4, 3])
b_fc3 = bias_variable([3])
y_policy = tf.placeholder(tf.float32, shape=[None, 3])
y_policy_conv = tf.matmul(h_fc1, W_fc3) + b_fc3
cross_entropy_policy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_policy, logits=y_policy_conv))
train_step_policy = tf.train.AdamOptimizer(learning_rate = 0.01).minimize(cross_entropy_policy)
To make it work as a regression, I changed the fully connected part (W_fc3, b_fc3, the output, the loss, and the train step) so that the tensors have size 1 instead of 3, as shown below (the rest of the graph is unchanged):
W_fc3 = weight_variable([baseFeatureSize * 4, 1])
b_fc3 = bias_variable([1])
y_policy = tf.placeholder(tf.float32, shape=[None, 1])
y_policy_conv = tf.nn.softmax(tf.matmul(h_fc1, W_fc3) + b_fc3)
cross_entropy_policy = tf.reduce_mean(-tf.reduce_sum(y_policy * tf.log(y_policy_conv), reduction_indices=1))
train_step_policy = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy_policy)
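One thing to watch in the regression version above: tf.nn.softmax normalizes across the output dimension, so applied to a single logit it always returns exactly 1, which makes tf.log(y_policy_conv) zero and the loss constant. A standalone numpy sketch of the softmax definition (not part of the graph above) shows this:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax across the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# With a single logit per row, the normalization makes every output exactly 1.
logits = np.array([[3.7], [-2.0], [0.0]])
print(softmax(logits))  # [[1.] [1.] [1.]]
```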
But it keeps throwing the following error:

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [1] rhs shape= [3]

I can't see a 3 anywhere. What could be wrong?
Can you add the code where you assign values to the tensors? – rmeertens
Sorry, I just realized what went wrong. The Supervisor's logdir was pointing at a checkpoint of the previous (classification) version of the model, so restoring tried to assign the old shape-[3] variables into the new shape-[1] ones. I pointed it at an empty logdir and it worked. Thanks for looking into it. – Achilles
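As a side note on the regression head itself: a regression output is usually left linear (no softmax) and trained with mean squared error rather than cross-entropy, i.e. something like y_policy_conv = tf.matmul(h_fc1_drop, W_fc3) + b_fc3 with tf.reduce_mean(tf.square(y_policy - y_policy_conv)) as the loss. A minimal numpy sketch of that loss, with hypothetical values:

```python
import numpy as np

# Mean squared error: the usual loss for a single-value regression output.
y_true = np.array([[1.0], [0.0]])   # hypothetical targets
y_pred = np.array([[0.8], [0.1]])   # hypothetical linear network outputs
mse = np.mean(np.square(y_true - y_pred))
print(mse)  # 0.025
```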