
Siamese model with LSTM network fails to train using TensorFlow

Dataset Description

The dataset contains pairs of questions, together with a training label that says whether the two questions are the same. For example:

"How do I read and find my YouTube comments?", "How can I see all my Youtube comments?", "1"

The goal of the model is to identify whether a given pair of questions is the same or different.

Approach

I created a Siamese network to identify whether two questions are the same. Here is the model:

with graph.as_default():
    # Euclidean distance between the final LSTM outputs of the two questions
    diff = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(question1_outputs[:, -1, :], question2_outputs[:, -1, :])), reduction_indices=1))

    # Contrastive loss: pull matching pairs together, push mismatched
    # pairs apart up to the margin
    margin = tf.constant(1.)
    labels = tf.to_float(labels)
    match_loss = tf.expand_dims(tf.square(diff, 'match_term'), 0)
    mismatch_loss = tf.expand_dims(tf.maximum(0., tf.subtract(margin, tf.square(diff)), 'mismatch_term'), 0)

    loss = tf.add(tf.matmul(labels, match_loss), tf.matmul((1 - labels), mismatch_loss), 'loss_add')
    distance = tf.reduce_mean(loss)

    optimizer = tf.train.AdamOptimizer(learning_rate).minimize(distance)
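For reference, the quantity this aims at is the standard contrastive loss, loss_i = y_i * d_i^2 + (1 - y_i) * max(0, margin - d_i^2). A minimal element-wise sketch of the same idea (not from the original post), assuming `labels` is the float tensor of shape [batch_size, 1] from above:

# Hypothetical element-wise equivalent of the loss above, shown for
# comparison; multiplies per pair instead of using matmul.
d_sq = tf.expand_dims(tf.square(diff), 1)  # shape [batch_size, 1]
per_pair_loss = labels * d_sq + (1. - labels) * tf.maximum(0., margin - d_sq)
mean_loss = tf.reduce_mean(per_pair_loss)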

Here is the code that trains the model:

with graph.as_default():
    saver = tf.train.Saver()

with tf.Session(graph=graph) as sess:
    # Feed the pre-trained embedding matrix while initializing variables
    sess.run(tf.global_variables_initializer(), feed_dict={embedding_placeholder: embedding_matrix})

    iteration = 1
    for e in range(epochs):
        summary_writer = tf.summary.FileWriter('/Users/mithun/projects/kaggle/quora_question_pairs/logs', sess.graph)
        summary_writer.add_graph(sess.graph)

        for ii, (x1, x2, y) in enumerate(get_batches(question1_train, question2_train, label_train, batch_size), 1):
            feed = {question1_inputs: x1,
                    question2_inputs: x2,
                    labels: y[:, None],
                    keep_prob: 0.9}
            # Run the optimizer alongside the loss so the weights actually update
            loss1, _ = sess.run([distance, optimizer], feed_dict=feed)

            if iteration % 5 == 0:
                print("Epoch: {}/{}".format(e, epochs),
                      "Iteration: {}".format(iteration),
                      "Train loss: {:.3f}".format(loss1))

            if iteration % 50 == 0:
                # Evaluate on the validation set with dropout disabled
                val_acc = []
                for x1, x2, y in get_batches(question1_val, question2_val, label_val, batch_size):
                    feed = {question1_inputs: x1,
                            question2_inputs: x2,
                            labels: y[:, None],
                            keep_prob: 1}
                    batch_acc = sess.run(accuracy, feed_dict=feed)  # `accuracy` is defined elsewhere in the graph
                    val_acc.append(batch_acc)
                print("Val acc: {:.3f}".format(np.mean(val_acc)))
            iteration += 1

    saver.save(sess, "checkpoints/quora_pairs.ckpt")
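`get_batches` is not shown in the question; a minimal sketch of what such a helper might look like (hypothetical, assuming the inputs are NumPy arrays of equal length):

import numpy as np

def get_batches(q1, q2, y, batch_size):
    """Hypothetical batching helper -- the original is not shown.
    Yields aligned mini-batches, dropping any final partial batch."""
    n_batches = len(y) // batch_size
    for b in range(n_batches):
        start, end = b * batch_size, (b + 1) * batch_size
        yield q1[start:end], q2[start:end], y[start:end]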

I have trained the above model with the following graph:

graph = tf.Graph()

with graph.as_default():
    embedding_placeholder = tf.placeholder(tf.float32, shape=embedding_matrix.shape, name='embedding_placeholder')
    with tf.variable_scope('siamese_network') as scope:
        labels = tf.placeholder(tf.int32, [batch_size, None], name='labels')
        keep_prob = tf.placeholder(tf.float32, name='question1_keep_prob')

        with tf.name_scope('question1') as question1_scope:
            question1_inputs = tf.placeholder(tf.int32, [batch_size, seq_len], name='question1_inputs')

            # Pre-trained word embeddings, kept fixed during training
            question1_embedding = tf.get_variable(name='embedding', initializer=embedding_placeholder, trainable=False)
            question1_embed = tf.nn.embedding_lookup(question1_embedding, question1_inputs)

            question1_lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
            question1_drop = tf.contrib.rnn.DropoutWrapper(question1_lstm, output_keep_prob=keep_prob)
            question1_multi_lstm = tf.contrib.rnn.MultiRNNCell([question1_drop] * lstm_layers)

            q1_initial_state = question1_multi_lstm.zero_state(batch_size, tf.float32)

            question1_outputs, question1_final_state = tf.nn.dynamic_rnn(question1_multi_lstm, question1_embed, initial_state=q1_initial_state)

        # Reuse the same weights for the second tower of the Siamese network
        scope.reuse_variables()

        with tf.name_scope('question2') as question2_scope:
            question2_inputs = tf.placeholder(tf.int32, [batch_size, seq_len], name='question2_inputs')

            question2_embedding = question1_embedding
            question2_embed = tf.nn.embedding_lookup(question2_embedding, question2_inputs)

            question2_lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
            question2_drop = tf.contrib.rnn.DropoutWrapper(question2_lstm, output_keep_prob=keep_prob)
            question2_multi_lstm = tf.contrib.rnn.MultiRNNCell([question2_drop] * lstm_layers)

            q2_initial_state = question2_multi_lstm.zero_state(batch_size, tf.float32)

            question2_outputs, question2_final_state = tf.nn.dynamic_rnn(question2_multi_lstm, question2_embed, initial_state=q2_initial_state)

I compute the cosine distance from the RNN outputs, using about 10,000 labeled examples. However, the accuracy stagnates around 0.630 and, strangely, the validation accuracy is the same across all iterations.

lstm_size = 64 
lstm_layers = 1 
batch_size = 128 
learning_rate = 0.001 

Is there anything wrong with the way I have created the model?

A good first pass at debugging: make the network completely linear and fit it to one or two trivial examples. Once it fits (surprisingly often it won't), slowly re-introduce the nonlinearities. Since the learning task is trivial, you can then attribute slow or non-existent learning to dead/saturated nonlinearities. –
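A minimal sketch of that sanity check, using the session and placeholders from the question (the fixed-batch loop itself is illustrative, not from this thread): train repeatedly on one batch and watch whether the loss can reach zero at all.

# Hypothetical "overfit a single batch" check: if the network can learn
# at all, this loss should drop toward zero within a few hundred steps.
x1_fixed, x2_fixed, y_fixed = next(get_batches(question1_train, question2_train, label_train, batch_size))
feed = {question1_inputs: x1_fixed,
        question2_inputs: x2_fixed,
        labels: y_fixed[:, None],
        keep_prob: 1.0}
for step in range(200):
    l, _ = sess.run([distance, optimizer], feed_dict=feed)
    if step % 50 == 0:
        print("overfit-check loss: {:.4f}".format(l))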

It's hard to say anything about the accuracy (I'm not familiar with the dataset or the architecture), but a couple of things. Not sure why you wouldn't want to learn your embeddings, but then you should say trainable=False, not trainable='false', which will not work. Also, it shouldn't hurt, but I don't think you need scope.reuse_variables(), or tf.sqrt for diff, if you are squaring it later in two different places anyway. – jdehesa

I have updated the question with a brief dataset description and the goal of the model. 1) I set trainable=False because I am using pre-trained word embeddings. 2) I am using a Siamese network here; at a high level it involves two identical networks that share the same weights, and then we find the distance between the outputs of the two networks. If the distance is less than a threshold, the questions are the same, otherwise they are not. Hence I used scope.reuse_variables(). – Mithun
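The thresholding rule described in this comment also suggests how the `accuracy` tensor used in the training loop could be defined; a hypothetical sketch (the threshold value is illustrative and would need tuning):

# Hypothetical definition of the `accuracy` tensor: pairs closer than
# the threshold are predicted "same" (label 1).
threshold = tf.constant(0.5)  # illustrative value, not from the question
predictions = tf.cast(tf.less(diff, threshold), tf.float32)
targets = tf.cast(tf.reshape(labels, [-1]), tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predictions, targets), tf.float32))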

Answer

This is a common problem with imbalanced datasets such as the recently released Quora dataset you are using. Since the Quora dataset is imbalanced (~63% negative and ~37% positive examples), you need proper initialization of the weights. Without it, your solution will get stuck in a local minimum that simply predicts the negative class for everything. Hence the 63% accuracy: that is the percentage of "not similar" questions in your validation data. If you check the results obtained on your validation set, you will notice that it predicts all zeros. The truncated normal initialization proposed by He et al., http://arxiv.org/abs/1502.01852, is a good alternative for initializing the weights.
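As an illustration, a hedged sketch of He-style truncated-normal initialization in TF 1.x, applied to a hypothetical dense layer on top of the LSTM outputs (the layer itself is not part of the question's code):

import numpy as np
import tensorflow as tf

# He et al. (2015) initialization: truncated normal with
# stddev = sqrt(2 / fan_in). The dense output layer is hypothetical.
fan_in = lstm_size
he_init = tf.truncated_normal_initializer(stddev=np.sqrt(2.0 / fan_in))
output_weights = tf.get_variable('output_weights', shape=[lstm_size, 1], initializer=he_init)

TF 1.x also ships tf.contrib.layers.variance_scaling_initializer(), which implements the same scheme directly.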