Shape mismatch in tf.train.batch when training my own resnet_v2_152 with the slim library
I build a custom np.array by stacking 20 of my own images on top of each other, which means my numpy array has shape
[224, 224, 20]
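For reference, the stacking step can be sketched like this in plain numpy (placeholder zero arrays stand in for my real image data, and the actual image loading is omitted):

```python
import numpy as np

# 20 single-channel 224x224 frames (placeholder data standing in for
# real images); stacking along the last axis gives the custom array.
frames = [np.zeros((224, 224), dtype=np.uint8) for _ in range(20)]
stacked = np.stack(frames, axis=-1)
print(stacked.shape)  # (224, 224, 20)

# The raw bytes that would go into a tfrecord via a bytes feature.
raw = stacked.tobytes()
print(len(raw))  # 224 * 224 * 20 = 1003520 bytes for uint8 data
```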
I have no problem converting the data to tfrecords using bytes conversion, but after the image preprocessing step it always shows the error
INFO:tensorflow:Error reported to Coordinator:
<class 'tensorflow.python.framework.errors_impl.InvalidArgumentError'>,
Shape mismatch in tuple component 0. Expected [224,224,3], got [224,224,20]
and
OutOfRangeError (see above for traceback): FIFOQueue '_5_batch/fifo_queue'
is closed and has insufficient elements (requested 1, current size 0)
when I apply tf.train.batch.
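To illustrate what the shape mismatch means outside the queue machinery: the pipeline is built expecting 3-channel images, and the raw bytes of a 20-channel array simply cannot be viewed as [224, 224, 3]. A minimal numpy sketch of the same inconsistency, independent of TensorFlow:

```python
import numpy as np

# Raw bytes of a 20-channel image, as stored in the tfrecord.
raw = np.zeros((224, 224, 20), dtype=np.uint8).tobytes()
decoded = np.frombuffer(raw, dtype=np.uint8)

# Reshaping to the true 20-channel shape works...
ok = decoded.reshape(224, 224, 20)
print(ok.shape)  # (224, 224, 20)

# ...but forcing the default 3-channel shape fails, analogous to the
# "Shape mismatch in tuple component 0" the FIFOQueue reports.
try:
    decoded.reshape(224, 224, 3)
except ValueError as e:
    print("cannot reshape:", e)
```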
Here is part of my code:
    dataset = dataset_factory.get_dataset(
        FLAGS.datasetname, FLAGS.dataset, FLAGS.dataset_dir)
    network_fn = nets_factory.get_network_fn(
        FLAGS.model_name,
        num_classes=101,
        is_training=True)
    provider = slim.dataset_data_provider.DatasetDataProvider(
        dataset,
        num_readers=4,
        common_queue_capacity=20 * FLAGS.batch_size,
        common_queue_min=10 * FLAGS.batch_size)
    [image, label] = provider.get(['image', 'label'])
    label -= 0
    preprocessing_name = FLAGS.preprocessing_name or FLAGS.model_name
    image_preprocessing_fn = preprocessing_factory.get_preprocessing(
        preprocessing_name, is_training=True)
    eval_image_size = FLAGS.eval_image_size or network_fn.default_image_size
    image = image_preprocessing_fn(image, eval_image_size, eval_image_size)

    # Batch size is 1
    images, labels = tf.train.batch(
        [image, label],
        batch_size=FLAGS.batch_size,
        num_threads=4,
        capacity=5 * FLAGS.batch_size)
    init_op = tf.group(tf.global_variables_initializer(),
                       tf.local_variables_initializer())

    # This part is to see the fetched results
    with tf.Session() as sess:
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)
        sess.run(init_op)
        im = sess.run(images)
        l = sess.run(label)
        coord.request_stop()
        coord.join(threads)
I am sticking to the style of train_image_classifier.py because I want to use the default training pipeline provided by the slim library.
I would really appreciate any help or answers. Thanks.