I am working with an AlexNet that I fine-tuned on a flower dataset with 5 classes. Now I want to run predictions with the fine-tuned model. The main code is shown below; it fails with a TensorFlow placeholder error.

import os 
import numpy as np 
import tensorflow as tf 
from datetime import datetime 
from alexnet_flower import AlexNet 
from datagenerator import ImageDataGenerator 

from scipy.misc import imread 
from scipy.misc import imresize 
import time 
import matplotlib.image as mpimg 
from scipy.ndimage import filters 
import urllib 
from numpy import random 
from numpy import * 
from pylab import * 
import matplotlib.pyplot as plt 
import matplotlib.cbook as cbook 

from tensorflow.core.protobuf import saver_pb2 

im1 = (imread("one.png")[:,:,:3]).astype(float32) 
#print(im1.shape()) 
im1 = im1 - mean(im1) 
#im1 = imresize(im1,[227,227,3]) 
im1[:, :, 0], im1[:, :, 2] = im1[:, :, 2], im1[:, :, 0] 

im2 = (imread("two.png")[:,:,:3]).astype(float32) 
im2 = im2 - mean(im2) 
#im2 = imresize(im2,[227,227,3]) 
im2[:, :, 0], im2[:, :, 2] = im2[:, :, 2], im2[:, :, 0] 

""" 
Configuration settings 
""" 

print(im1.shape) 
num_classes = 5 
x = tf.placeholder(tf.float32, [2, 227, 227, 3]) 
#y = tf.placeholder(tf.float32, [None, num_classes]) 
keep_prob = tf.placeholder(tf.float32) 

#print(x) 

# Initialize model 
model = AlexNet(x,keep_prob,num_classes) 

# Link variable to model output 
score = model.fc8 

saver = tf.train.Saver(write_version = saver_pb2.SaverDef.V1) 

#x1 = tf.placeholder(tf.float32, (None,) + xdim) 

with tf.Session() as sess: 

    # Initialize all variables 
    sess.run(tf.global_variables_initializer()) 

    # Add the model graph to TensorBoard 

    # Load the pretrained weights into the non-trainable layer 
    saver.restore(sess,"/home/saurabh/deep_learning/tests/finetune_alexnet_with_tensorflow/model_epoch1.ckpt") 
# x1:[im1,im2] 
    print('error!!!!!!') 

    output = sess.run(score, feed_dict = {x:[im1,im2]}) 

The AlexNet code I am using is the same one I used for fine-tuning, so I don't think the problem is in the AlexNet code itself.

In the end I get the error below. I have tried hard to debug it, but I cannot understand the problem. Thanks for your help.

Traceback (most recent call last): 
    File "/home/saurabh/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1021, in _do_call 
    return fn(*args) 
    File "/home/saurabh/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1003, in _run_fn 
    status, run_metadata) 
    File "/home/saurabh/anaconda3/lib/python3.6/contextlib.py", line 89, in __exit__ 
    next(self.gen) 
    File "/home/saurabh/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 469, in raise_exception_on_not_ok_status 
    pywrap_tensorflow.TF_GetCode(status)) 
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float 
    [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]] 

During handling of the above exception, another exception occurred: 

Traceback (most recent call last): 
    File "finetune_prediction_flowers.py", line 81, in <module> 
    output = sess.run(score, feed_dict = {x:[im1,im2]}) 
    File "/home/saurabh/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 766, in run 
    run_metadata_ptr) 
    File "/home/saurabh/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 964, in _run 
    feed_dict_string, options, run_metadata) 
    File "/home/saurabh/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1014, in _do_run 
    target_list, options, run_metadata) 
    File "/home/saurabh/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1034, in _do_call 
    raise type(e)(node_def, op, message) 
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float 
    [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]] 

Caused by op 'Placeholder_1', defined at: 
    File "finetune_prediction_flowers.py", line 56, in <module> 
    keep_prob = tf.placeholder(tf.float32) 
    File "/home/saurabh/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1587, in placeholder 
    name=name) 
    File "/home/saurabh/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2043, in _placeholder 
    name=name) 
    File "/home/saurabh/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op 
    op_def=op_def) 
    File "/home/saurabh/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2240, in create_op 
    original_op=self._default_original_op, op_def=op_def) 
    File "/home/saurabh/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1128, in __init__ 
    self._traceback = _extract_stack() 

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_1' with dtype float 
    [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]] 

Answer

The error message is quite explicit:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_1' with dtype float 
    [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]] 

It means: you must feed a value for the placeholder keep_prob, because the score you request in your sess.run call depends on keep_prob having a value. So just set the keep probability to whatever value you want (here, for example, 0.8):

with tf.Session() as sess:  
    # Load the pretrained weights into the non-trainable layer 
    saver.restore(sess,"/home/saurabh/deep_learning/tests/finetune_alexnet_with_tensorflow/model_epoch1.ckpt")  
    output = sess.run(score, feed_dict = {x:[im1,im2], keep_prob: 0.8}) 

By the way: if you are restoring from a checkpoint, there is no need to call sess.run(tf.global_variables_initializer()).

Thanks! At least the error is gone. Could you tell me what this keep probability is used for? Also, what changes should I make in the code to get per-class probability values for a given image instead of scores? I used the same code earlier to get probability values. Thanks in advance! – talos1904

I figured out the change I have to make to get probabilities. Can you tell me why a probability value is passed here and what it is used for? Thanks! – talos1904
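
For reference, a minimal sketch of the kind of change referred to above, assuming `score` (model.fc8) holds the raw, unnormalised logits: apply tf.nn.softmax to score and run that tensor instead, with dropout disabled via keep_prob = 1.0.

probs = tf.nn.softmax(score)  # convert raw fc8 logits to per-class probabilities 

with tf.Session() as sess: 
    saver.restore(sess, "/home/saurabh/deep_learning/tests/finetune_alexnet_with_tensorflow/model_epoch1.ckpt") 
    # keep_prob = 1.0 disables dropout for inference 
    output = sess.run(probs, feed_dict = {x: [im1, im2], keep_prob: 1.0}) 
    print(output)                      # one row of 5 probabilities per image 
    print(np.argmax(output, axis=1))   # predicted class index per image 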

I am not familiar with 'AlexNet', but usually the keep probability refers to dropout regularization. With dropout, a unit is kept (i.e. left unchanged) with probability 'keep_prob' and dropped with probability '1 - keep_prob'. Dropout is normally only used during training, so consider disabling it at test time, i.e. setting 'keep_prob' to 1.0. However, you will have to check the 'AlexNet' implementation to make sure this parameter really refers to dropout regularization. – kaufmanu
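
To illustrate what the keep probability does in dropout, here is a generic sketch with a hypothetical activation tensor `h` (not part of the model above): tf.nn.dropout keeps each unit with probability keep_prob, scales the kept values by 1/keep_prob, and becomes a no-op when keep_prob is 1.0.

import tensorflow as tf 

keep_prob = tf.placeholder(tf.float32)   # same pattern as in the question 
h = tf.ones([1, 10])                     # hypothetical layer activations 
h_drop = tf.nn.dropout(h, keep_prob)     # keep each unit with prob keep_prob, scale survivors by 1/keep_prob 

with tf.Session() as sess: 
    print(sess.run(h_drop, feed_dict={keep_prob: 0.5}))  # roughly half the units zeroed, the rest become 2.0 
    print(sess.run(h_drop, feed_dict={keep_prob: 1.0}))  # identical to h: dropout disabled at test time 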