
Tensorflow: ValueError with variable_scope. Here is my code:

''' 
Tensorflow LSTM classification of 16x30 images. 
''' 

from __future__ import print_function 

import tensorflow as tf 
from tensorflow.python.ops import rnn, rnn_cell 
import numpy as np 
from numpy import genfromtxt 
from sklearn.cross_validation import train_test_split 
import pandas as pd 

''' 
a Tensorflow LSTM that will sequentially input several lines from each single image, 
i.e. the Tensorflow graph takes a flat (1, 480) feature vector per image, as in the 
multi-layer perceptron MNIST Tensorflow tutorial, but then reshapes it sequentially 
into 30 time_steps of 16 features each. 
''' 
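# A minimal illustration of the reshape described above (added sketch; the
# names example_row/example_seq are placeholders, not part of the pipeline):
# one flat 480-feature row becomes 30 time steps of 16 features each.
example_row = np.zeros((1, 480))           # one flat image, as stored in the CSV
example_seq = example_row.reshape(30, 16)  # row i = the 16 features fed at step i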

blaine = genfromtxt('./Desktop/Blaine_CSV_lstm.csv',delimiter=',') # CSV transform to array 
target = [row[0] for row in blaine]    # 1st column in CSV as the targets 
data = blaine[:, 1:481]       #flat feature vectors 
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.05, random_state=42) 

f = open('cs-training.csv', 'w')  # 1st split for training 
for i, j in enumerate(X_train): 
    k = np.append(np.array(y_train[i]), j) 
    f.write(",".join([str(s) for s in k]) + '\n') 
f.close() 

f = open('cs-testing.csv', 'w')  # 2nd split for test 
for i, j in enumerate(X_test): 
    k = np.append(np.array(y_test[i]), j) 
    f.write(",".join([str(s) for s in k]) + '\n') 
f.close() 



new_data = genfromtxt('cs-training.csv',delimiter=',') # Training data 
new_test_data = genfromtxt('cs-testing.csv',delimiter=',') # Test data 

x_train = np.array([i[1:] for i in new_data]) 
ss = pd.Series(y_train)  # indexing series needed for the Pandas get_dummies one-hot vectors below 
y_train_onehot = pd.get_dummies(ss) 

x_test = np.array([i[1:] for i in new_test_data]) 
gg = pd.Series(y_test) 
y_test_onehot = pd.get_dummies(gg) 


# General Parameters 
learning_rate = 0.001 
training_iters = 100000 
batch_size = 33 
display_step = 10 

# Tensorflow LSTM Network Parameters 
n_input = 16 # features per time step (image width) 
n_steps = 30 # timesteps (image height) 
n_hidden = 128 # hidden layer num of features 
n_classes = 20 # total number of target classes 

# tf Graph input 
x = tf.placeholder("float", [None, n_steps, n_input]) 
y = tf.placeholder("float", [None, n_classes]) 

# Define weights 

weights = { 
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes])) 
} 
biases = { 
    'out': tf.Variable(tf.random_normal([n_classes])) 
} 


def RNN(x, weights, biases): 

    # Prepare data shape to match `rnn` function requirements 
    # Current data input shape: (batch_size, n_steps, n_input) 
    # Required shape: 'n_steps' tensors list of shape (batch_size, n_input) 

    # Permuting batch_size and n_steps 
    x = tf.transpose(x, [1, 0, 2]) 
    # Reshaping to (n_steps*batch_size, n_input) 
    x = tf.reshape(x, [-1, n_input]) 
    # Split to get a list of 'n_steps' tensors of shape (batch_size, n_input) 
    x = tf.split(0, n_steps, x) 

    # Define a lstm cell with tensorflow 
    with tf.variable_scope('cell_def'): 
        lstm_cell = tf.nn.rnn_cell.LSTMCell(n_hidden, forget_bias=1.0) 

    # Get lstm cell output 
    with tf.variable_scope('rnn_def'): 
        outputs, states = tf.nn.rnn(lstm_cell, x, dtype=tf.float32) 

    # Linear activation, using rnn inner loop last output 
    return tf.matmul(outputs[-1], weights['out']) + biases['out'] 

pred = RNN(x, weights, biases) 

# Define loss and optimizer 
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y)) 
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) 

# Evaluate model 
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1)) 
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) 

# Initializing the variables 
init = tf.initialize_all_variables() 

# Launch the graph 
with tf.Session() as sess: 
    sess.run(init) 
    step = 1 
    # Keep training until reach max iterations 
    while step * batch_size < training_iters: 
        batch_x = np.split(x_train, 15) 
        batch_y = np.split(y_train_onehot, 15) 
        for index in range(len(batch_x)): 
            ouh1 = batch_x[index] 
            ouh2 = batch_y[index] 
            # Reshape data to get 30 sequences of 16 elements 
            ouh1 = np.reshape(ouh1, (batch_size, n_steps, n_input)) 
            # Run optimization op (backprop) 
            sess.run(optimizer, feed_dict={x: ouh1, y: ouh2}) 
            if step % display_step == 0: 
                # Calculate batch accuracy 
                acc = sess.run(accuracy, feed_dict={x: ouh1, y: ouh2}) 
                # Calculate batch loss 
                loss = sess.run(cost, feed_dict={x: ouh1, y: ouh2}) 
                print("Iter " + str(step * batch_size) + ", Minibatch Loss= " + 
                      "{:.6f}".format(loss) + ", Training Accuracy= " + 
                      "{:.5f}".format(acc)) 
            step += 1  # count every minibatch, not only the displayed ones 
print("Optimization Finished!") 

With this, I get the following error. It looks as if I am re-creating the same variables on lines 92 and 97, and my concern is that this might be an incompatibility with Tensorflow 0.10.0 on the RNN definition side:

ValueError: Variable RNN/BasicLSTMCell/Linear/Matrix already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at: 

  File "/home/mohsen/lstm_mnist.py", line 92, in RNN 
    outputs, states = tf.nn.rnn(lstm_cell, x, dtype=tf.float32) 
  File "/home/mohsen/lstm_mnist.py", line 97, in <module> 
    pred = RNN(x, weights, biases) 
  File "/home/mohsen/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 81, in execfile 
    builtins.execfile(filename, *where) 

What could be the source of this error, and how can I fix it?
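The collision can be reproduced in isolation (a minimal sketch, assuming the TF 0.x variable-scope API): asking tf.get_variable for the same name twice in one scope raises exactly this ValueError, while reuse=True returns the existing variable instead.

import tensorflow as tf 

with tf.variable_scope('demo'): 
    v1 = tf.get_variable('w', shape=[2, 2])  # creates variable demo/w 

try: 
    with tf.variable_scope('demo'): 
        tf.get_variable('w', shape=[2, 2])   # same name, same scope 
except ValueError as e: 
    print(e)  # "Variable demo/w already exists, disallowed. ..." 

with tf.variable_scope('demo', reuse=True): 
    v2 = tf.get_variable('w', shape=[2, 2])  # OK: reuses demo/w 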

EDIT: The same variable_scope problem persists with the original code that mine is built on: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py


You are not iterating over the same variable in lines 92 and 97, because those will always be in the same namespace: you are calling one namespace from within another (the RNN function is embedded in it). So your effective variable scope ends up like 'backward/forward'. The problem is lines 89 and 92. So could you instead try: with tf.variable_scope('cell_def'): lstm_cell = tf.nn.rnn_cell.LSTMCell(n_hidden, forget_bias=1.0) and with tf.variable_scope('rnn_def'): outputs, states = tf.nn.rnn(lstm_cell, x, dtype=tf.float32) – cleros


That problem went away, but a new one appeared involving the 'init' instance: ValueError: Fetch argument <tensorflow.python.framework.ops.Operation object at 0x7ff12eeabc10> cannot be interpreted as a Tensor. (Operation u'init_2' has been marked as not fetchable.) –


This one is back again: ValueError: Variable rnn_def/RNN/LSTMCell/W_0 already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at: File "", line 95, in RNN: outputs, states = tf.nn.rnn(lstm_cell, x, dtype=tf.float32); File "", line 100, in <module>: pred = RNN(x, weights, biases); File "/home/mohsen/anaconda2/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2885, in run_code: exec(code_obj, self.user_global_ns, self.user_ns) –
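The traceback in this last comment points at IPython's interactiveshell.py, which suggests the construction code was re-run inside the same interactive session, so the variables from the previous run are still registered in the default graph. A hedged sketch of the usual workaround (tf.reset_default_graph is part of the TF 0.x API; re-running the construction code after it starts from an empty graph, and the init op then covers exactly the variables of the fresh run):

import tensorflow as tf 

tf.reset_default_graph()  # drop variables left over from earlier runs 

# ...rebuild placeholders, RNN(), cost and optimizer here... 

init = tf.initialize_all_variables()  # create init after the full graph exists 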

Answer


You are not iterating over the same variable in lines 92 and 97, because those will always end up in the same namespace as in the original repo, at least in the current setup, since you call one namespace from within another (the RNN function is embedded in it). So your effective variable scope would be something like 'backward/forward'.

Hence my guess is that the problem lies in lines 89 and 92, since both live in the same namespace (see above), and both presumably introduce a variable named RNN/BasicLSTMCell/Linear/Matrix. So you should change your code as follows:

# Define a lstm cell with tensorflow 
with tf.variable_scope('cell_def'): 
    lstm_cell = tf.nn.rnn_cell.LSTMCell(n_hidden, forget_bias=1.0) 

# Get lstm cell output 
with tf.variable_scope('rnn_def'): 
    outputs, states = tf.nn.rnn(lstm_cell, x, dtype=tf.float32) 

This makes the LSTMCell initialization live in one namespace - "cell_def/*" - and the initialization of the full RNN in another - "rnn_def/*".
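Whether the fix took effect is visible in the variable names themselves (a quick check, assuming the TF 0.x API, where tf.all_variables lists every variable in the default graph):

# After building the graph with the scopes above, inspect what was created: 
for v in tf.all_variables(): 
    print(v.name) 
# The LSTM parameters should now live under the new scope, e.g. 
#   rnn_def/RNN/LSTMCell/W_0:0 
#   rnn_def/RNN/LSTMCell/B:0 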