LSTM backpropagation in Tensorflow

In the Truncated Backpropagation section of Google's official PTB tutorial, there is an implementation that uses BasicLSTMCell by building a for loop that unrolls the graph for num_steps steps:
# Placeholder for the inputs in a given iteration.
words = tf.placeholder(tf.int32, [batch_size, num_steps])
lstm = rnn_cell.BasicLSTMCell(lstm_size)
# Initial state of the LSTM memory.
initial_state = state = tf.zeros([batch_size, lstm.state_size])
for i in range(num_steps):
    # The value of state is updated after processing each batch of words.
    output, state = lstm(words[:, i], state)

    # The rest of the code.
    # ...

final_state = state
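For context, a minimal self-contained version of that pattern looks roughly like the sketch below (the toy dimensions, the squared-error loss, and state_is_tuple=False are assumptions for illustration; the tutorial only shows the fragment above). Because all num_steps applications of the cell live in one graph, minimizing the loss backpropagates through every unrolled step:

import tensorflow as tf

batch_size, num_steps, input_dim, lstm_size = 4, 5, 3, 8  # toy sizes, assumed

inputs = tf.placeholder(tf.float32, [batch_size, num_steps, input_dim])
targets = tf.placeholder(tf.float32, [batch_size, num_steps, lstm_size])

lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_size, state_is_tuple=False)
# With state_is_tuple=False the state is one concatenated tensor,
# so lstm.state_size is an int (2 * lstm_size), as in the tutorial code.
initial_state = state = tf.zeros([batch_size, lstm.state_size])

outputs = []
with tf.variable_scope('RNN'):
    for i in range(num_steps):
        if i > 0:
            tf.get_variable_scope().reuse_variables()  # share weights across steps
        output, state = lstm(inputs[:, i, :], state)
        outputs.append(output)
final_state = state

# The loss depends on every unrolled step, so this train_step backpropagates
# through all num_steps applications of the cell (truncated BPTT).
loss = tf.reduce_mean(tf.square(tf.stack(outputs, axis=1) - targets))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)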
I have my own implementation that predicts a time series with BasicLSTMCell. The difference is that I do not use any loop inside the graph; instead, I run a loop in my program that updates the LSTM cell's state via an assign op on each step. Here is the code:
input_layer = tf.placeholder(tf.float32, [input_width, input_dim * 1])
lstm_cell1 = tf.nn.rnn_cell.BasicLSTMCell(input_dim * input_width)
lstm_state1 = tf.Variable(tf.zeros([input_width,lstm_cell1.state_size]))
lstm_output1, lstm_state_output1 = lstm_cell1(input_layer, lstm_state1, scope='LSTM1')
lstm_update_op1 = lstm_state1.assign(lstm_state_output1)
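The definitions of final_output, correct_output and train_step are omitted here; a minimal sketch of one plausible version, assuming a linear readout with a squared-error loss (the names W_out and b_out are purely illustrative):

correct_output = tf.placeholder(tf.float32, [input_width, input_dim * 1])

# Illustrative linear readout: maps the cell output (shape
# [input_width, input_dim * input_width]) back to the input dimension.
W_out = tf.Variable(tf.truncated_normal([input_dim * input_width, input_dim * 1], stddev=0.1))
b_out = tf.Variable(tf.zeros([input_dim * 1]))
final_output = tf.matmul(lstm_output1, W_out) + b_out

loss = tf.reduce_mean(tf.square(final_output - correct_output))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

sess = tf.Session()
sess.run(tf.global_variables_initializer())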
for i in range(39000):
    input_v, output_v = get_new_input_output(i, A)
    _, _, network_output = sess.run([lstm_update_op1, train_step, final_output],
                                    feed_dict={input_layer: input_v, correct_output: output_v})
Does the second implementation achieve backpropagation through time, and is it a correct use of the LSTM cell in TensorFlow? Personally I prefer the second implementation because I find it clearer, and it also lends itself to streaming data. But the fact that Google proposes the first implementation makes me suspect I am doing something wrong.
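To make the question concrete: what I want to confirm is whether the gradient path in the second implementation ever crosses a step boundary. A sketch of a check using tf.gradients (assuming the graph above):

# The previous state enters each step's graph only as a read of the
# variable lstm_state1, so this shows what one training step can
# differentiate through within a single sess.run call:
grads = tf.gradients(final_output, [input_layer, lstm_state1])
print(grads)
# If the gradient stops at the variable read, every training step only
# backpropagates through one application of the cell, not through time.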