
TensorFlow: Remember LSTM state for next batch (stateful LSTM)

Given a trained LSTM model, I want to perform inference for single time steps, i.e. seq_length = 1 in the example below. After each time step the internal LSTM (memory and hidden) states need to be remembered for the next 'batch'. At the very beginning of inference, the internal LSTM states init_c, init_h are computed from the input and stored in an LSTMStateTuple object, which is passed to the LSTM. During training this state is updated every time step. For inference, however, I want the state to be preserved in between batches, i.e. the initial states only need to be computed at the very beginning, and after that the LSTM states should be saved after each 'batch' (n = 1).

I found this related StackOverflow question: Tensorflow, best way to save state in RNNs?. However, that only works with state_is_tuple=False, and this behaviour will soon be deprecated by TensorFlow (see rnn_cell.py). Keras seems to have a nice wrapper that makes stateful LSTMs possible, but I don't know the best way to achieve this in TensorFlow. This issue on the TensorFlow GitHub is also related to my question: https://github.com/tensorflow/tensorflow/issues/2838

Any good suggestions for building a stateful LSTM model?

inputs = tf.placeholder(tf.float32, shape=[None, seq_length, 84, 84], name="inputs") 
targets = tf.placeholder(tf.float32, shape=[None, seq_length], name="targets") 

num_lstm_layers = 2 

with tf.variable_scope("LSTM") as scope: 

    lstm_cell = tf.nn.rnn_cell.LSTMCell(512, initializer=initializer, state_is_tuple=True) 
    self.lstm = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * num_lstm_layers, state_is_tuple=True) 

    init_c = # compute initial LSTM memory state using contents in placeholder 'inputs' 
    init_h = # compute initial LSTM hidden state using contents in placeholder 'inputs' 
    self.state = [tf.nn.rnn_cell.LSTMStateTuple(init_c, init_h)] * num_lstm_layers 

    outputs = [] 

    for step in range(seq_length):

        if step != 0:
            scope.reuse_variables()

        # CNN features, as input for LSTM
        x_t = # ...

        # LSTM step through time
        output, self.state = self.lstm(x_t, self.state)
        outputs.append(output)

Possible duplicate of [Tensorflow, best way to save state in RNNs?](http://stackoverflow.com/questions/37969065/tensorflow-best-way-to-save-state-in-rnns)

Answers


I found it easiest to save the whole state for all layers in a placeholder.

# Numpy array holding the saved state: a (c, h) pair for every layer
init_state = np.zeros((num_layers, 2, batch_size, state_size))

...

# Placeholder with the same layout, fed with the saved state on every run
state_placeholder = tf.placeholder(tf.float32, [num_layers, 2, batch_size, state_size])

Then, before using the native TensorFlow RNN API, unpack it and create a tuple of LSTMStateTuples:

l = tf.unpack(state_placeholder, axis=0)
rnn_tuple_state = tuple(
    [tf.nn.rnn_cell.LSTMStateTuple(l[idx][0], l[idx][1])
     for idx in range(num_layers)]
)

Then pass it into the RNN API as usual:

cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True) 
cell = tf.nn.rnn_cell.MultiRNNCell([cell]*num_layers, state_is_tuple=True) 
outputs, state = tf.nn.dynamic_rnn(cell, x_input_batch, initial_state=rnn_tuple_state) 

The state variable returned by dynamic_rnn is then fed to the placeholder for the next batch.
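For completeness, here is a minimal sketch (using the session API of that TensorFlow version) of how the fetched state can be fed back in on the next run. It assumes x_input_batch is a placeholder for the input batch and that outputs, state and state_placeholder are defined as above; num_batches and get_next_batch() are hypothetical. Feeding the fetched nested tuple of LSTMStateTuples into state_placeholder works because it has the same (num_layers, 2, batch_size, state_size) layout.

import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())

    # Zero state only once, at the very beginning
    current_state = np.zeros((num_layers, 2, batch_size, state_size))

    for _ in range(num_batches):        # num_batches is hypothetical
        batch_x = get_next_batch()      # hypothetical data helper
        batch_outputs, current_state = sess.run(
            [outputs, state],
            feed_dict={x_input_batch: batch_x,
                       state_placeholder: current_state})
        # current_state now holds this batch's final LSTM state and is fed
        # back in as the initial state of the next batch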


Tensorflow, best way to save state in RNNs? was actually my original question. The code below shows how I use the state tuples.

with tf.variable_scope('decoder') as scope:
    rnn_cell = tf.nn.rnn_cell.MultiRNNCell([
        tf.nn.rnn_cell.LSTMCell(512, num_proj=256, state_is_tuple=True),
        tf.nn.rnn_cell.LSTMCell(512, num_proj=WORD_VEC_SIZE, state_is_tuple=True)
    ], state_is_tuple=True)

    # Zero initial state: a [c, h] list per layer instead of an LSTMStateTuple
    state = [[tf.zeros((BATCH_SIZE, sz)) for sz in sz_outer] for sz_outer in rnn_cell.state_size]

    for t in range(TIME_STEPS):
        if t:
            last = y_[t - 1] if TRAINING else y[t - 1]
        else:
            last = tf.zeros((BATCH_SIZE, WORD_VEC_SIZE))

        # Concatenate the previous output with the current input, then step the cell
        y[t] = tf.concat(1, (y[t], last))
        y[t], state = rnn_cell(y[t], state)

        scope.reuse_variables()

Instead of using tf.nn.rnn_cell.LSTMStateTuple I just create a list, which works fine. In this example I am not saving the state, but you could easily make the state out of a Variable and just use assign to save the value.
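As a rough sketch of that Variable idea (not the answerer's actual code; it assumes rnn_cell, BATCH_SIZE and the TIME_STEPS loop above, and the names state_vars, save_state_ops and train_op are only illustrative): keep the state in non-trainable Variables and run assign ops after each batch, so the next session.run starts from where the last one ended.

import tensorflow as tf

# One non-trainable Variable per state tensor of every layer, initialised to zeros
state_vars = [[tf.Variable(tf.zeros((BATCH_SIZE, sz)), trainable=False)
               for sz in sz_outer]
              for sz_outer in rnn_cell.state_size]

# Use the Variables as the initial state for the unrolled loop above
state = [list(layer) for layer in state_vars]

# ... run the TIME_STEPS loop here; it overwrites `state` with the final state ...

# Assign ops that write the final state back into the Variables; fetch them
# together with the training/inference ops, e.g.
# sess.run([train_op] + save_state_ops, feed_dict={...})
save_state_ops = [tf.assign(var, new_val)
                  for var_layer, val_layer in zip(state_vars, state)
                  for var, new_val in zip(var_layer, val_layer)]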