
TensorFlow/TFLearn: ValueError: Cannot feed value of shape (256, 400, 400) for Tensor u'TargetsData/Y:0', which has shape '(?, 64)'

I want to build a ConvNet whose output has the same size as its input, so I implemented it with the TFLearn library. Since I only need a simple example for this purpose, I set up a single convolutional layer with zero padding so that the output has the same size as the input. Here is the code:

import tflearn
import tflearn.data_utils as du  # data preprocessing helpers (import omitted in the original post)

# X, Y, testX, testY are loaded elsewhere (data loading not shown in the question)
X = X.reshape([-1, 400, 400, 1]) 
Y = Y.reshape([-1, 400, 400, 1]) 
testX = testX.reshape([-1, 400, 400, 1]) 
testY = testY.reshape([-1, 400, 400, 1]) 
X, mean = du.featurewise_zero_center(X) 
testX = du.featurewise_zero_center(testX, mean) 


# Building a Network 
net = tflearn.input_data(shape=[None, 400, 400, 1]) 
net = tflearn.conv_2d(net, 64, 3, padding='same', activation='relu', bias=False) 
sgd = tflearn.SGD(learning_rate=0.1, lr_decay=0.96, decay_step=300) 
net = tflearn.regression(net, optimizer='sgd', 
        loss='categorical_crossentropy', 
        learning_rate=0.1) 
# Training 
model = tflearn.DNN(net, checkpoint_path='model_network', 
       max_checkpoints=10, tensorboard_verbose=3) 
model.fit(X, Y, n_epoch=100, validation_set=(testX, testY), 
     show_metric=True, batch_size=256, run_id='network_test') 

However, this code produces:

ValueError: Cannot feed value of shape (256, 400, 400) for Tensor u'TargetsData/Y:0', which has shape '(?, 64)' 

I've searched around and checked some documentation about this error, but I can't seem to get it working.

I'm not familiar with the TF API, but won't 'testX' be a tuple after 'testX = du.featurewise_zero_center(testX, mean)'? – erip

@erip Sorry, I omitted the header part. That line relies on 'import tflearn.data_utils as du'. Here 'tflearn.data_utils' is a module for data preprocessing. – David

Sure, but the call above returns both X and mean; the one below it returns only testX. – erip
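
To illustrate what the comments above are describing: when no mean is passed, featurewise_zero_center computes one and returns a (data, mean) tuple, and when a mean is supplied it returns only the centered array. A small sketch of that usage pattern (the dummy arrays are just placeholders):

import numpy as np
import tflearn.data_utils as du

X = np.random.rand(10, 400, 400).astype(np.float32)     # dummy training data
testX = np.random.rand(4, 400, 400).astype(np.float32)  # dummy test data

# No mean given: the function computes one and returns (centered_X, mean)
X, mean = du.featurewise_zero_center(X)

# Mean given: only the centered array comes back, so testX stays an ndarray
testX = du.featurewise_zero_center(testX, mean)
print(type(testX))  # <class 'numpy.ndarray'>, not a tuple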

Answer

The problem is that your convnet's output has shape (None, 64), but your target data (labels) has shape (None, 400, 400). I'm not sure what you're trying to do with your code: are you trying to do some kind of auto-encoding, or is it a classification task?
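
If it is meant as a classification task, the usual TFLearn pattern is to map the conv features through a fully_connected layer whose width equals the number of classes, so the network output lines up with one-hot targets of shape (None, n_classes). A rough sketch under that assumption (the 10-class setup, pooling size and dummy data below are placeholders, not taken from the question):

import numpy as np
import tflearn

n_classes = 10  # placeholder value, not from the question

# Dummy images and one-hot labels standing in for the question's data
X = np.random.rand(8, 400, 400, 1).astype(np.float32)
Y = np.eye(n_classes, dtype=np.float32)[np.random.randint(0, n_classes, 8)]

net = tflearn.input_data(shape=[None, 400, 400, 1])
net = tflearn.conv_2d(net, 64, 3, padding='same', activation='relu', bias=False)
net = tflearn.max_pool_2d(net, 4)
# fully_connected flattens the 4-D conv output and maps it to n_classes units,
# so the prediction and the (None, n_classes) targets now agree
net = tflearn.fully_connected(net, n_classes, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1,
        loss='categorical_crossentropy')
model = tflearn.DNN(net)
model.fit(X, Y, n_epoch=1, batch_size=4, show_metric=True)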

For an auto-encoder, below is a convolutional auto-encoder for MNIST; you can adapt it to your own data by changing the input_data shape:

from __future__ import division, print_function, absolute_import 

import numpy as np 
import matplotlib.pyplot as plt 
import tflearn 
import tflearn.data_utils as du 

# Data loading and preprocessing 
import tflearn.datasets.mnist as mnist 
X, Y, testX, testY = mnist.load_data(one_hot=True) 

X = X.reshape([-1, 28, 28, 1]) 
testX = testX.reshape([-1, 28, 28, 1]) 
X, mean = du.featurewise_zero_center(X) 
testX = du.featurewise_zero_center(testX, mean) 

# Building the encoder 
encoder = tflearn.input_data(shape=[None, 28, 28, 1]) 
encoder = tflearn.conv_2d(encoder, 16, 3, activation='relu') 
encoder = tflearn.max_pool_2d(encoder, 2) 
encoder = tflearn.conv_2d(encoder, 8, 3, activation='relu') 
decoder = tflearn.upsample_2d(encoder, 2) 
decoder = tflearn.conv_2d(decoder, 1, 3, activation='relu')  # reconstruct 1 channel from the upsampled features 

# Regression, with mean square error 
net = tflearn.regression(decoder, optimizer='adam', learning_rate=0.001, 
         loss='mean_square', metric=None) 

# Training the auto encoder 
model = tflearn.DNN(net, tensorboard_verbose=0) 
model.fit(X, X, n_epoch=10, validation_set=(testX, testX), 
      run_id="auto_encoder", batch_size=256) 

# Encoding X[0] for test 
print("\nTest encoding of X[0]:") 
# New model, re-using the same session, for weights sharing 
encoding_model = tflearn.DNN(encoder, session=model.session) 
print(encoding_model.predict([X[0]])) 

# Testing the image reconstruction on new data (test set) 
print("\nVisualizing results after being encoded and decoded:") 
testX = tflearn.data_utils.shuffle(testX)[0] 
# Applying encode and decode over test set 
encode_decode = model.predict(testX) 
# Compare original images with their reconstructions 
f, a = plt.subplots(2, 10, figsize=(10, 2)) 
for i in range(10): 
    a[0][i].imshow(np.reshape(testX[i], (28, 28))) 
    a[1][i].imshow(np.reshape(encode_decode[i], (28, 28))) 
f.show() 
plt.draw() 
plt.waitforbuttonpress() 
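
As noted above, the same structure can be pointed at the 400x400 data from the question by changing the input_data shape. A sketch of that adaptation, assuming X and Y really are per-pixel arrays reshaped to (None, 400, 400, 1) as in the question (the dummy arrays below are placeholders):

import numpy as np
import tflearn

# Dummy stand-ins for the question's X/Y; replace with the real reshaped arrays
X = np.random.rand(8, 400, 400, 1).astype(np.float32)
Y = np.random.rand(8, 400, 400, 1).astype(np.float32)

net = tflearn.input_data(shape=[None, 400, 400, 1])
net = tflearn.conv_2d(net, 16, 3, activation='relu')
net = tflearn.max_pool_2d(net, 2)   # 400 -> 200
net = tflearn.conv_2d(net, 8, 3, activation='relu')
net = tflearn.upsample_2d(net, 2)   # 200 -> 400
net = tflearn.conv_2d(net, 1, 3, activation='relu')
# Per-pixel regression: the output (None, 400, 400, 1) now matches the targets
net = tflearn.regression(net, optimizer='adam', learning_rate=0.001,
        loss='mean_square', metric=None)
model = tflearn.DNN(net)
model.fit(X, Y, n_epoch=1, batch_size=4, run_id='image_to_image')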

Thank you for the nice answer. I'd also like to know why 'padding='same'' doesn't work with 'net = tflearn.conv_2d(net, 64, 3, padding='same', activation='relu', bias=False)'. – David
