
I am trying to inject pretrained word2vec vectors into an existing TensorFlow seq2seq model.

Following this answer, I wrote the code below. However, it does not seem to improve performance the way it should, even though the values in the variables do get updated.

As I understand it, could the problem be that EmbeddingWrapper or embedding_attention_decoder creates its embedding independently of the vocabulary order? (A sanity-check sketch for this follows the code below.)

What is the best way to load pretrained vectors into a TensorFlow model?

import os
import sys

import tensorflow as tf
import word2vec
from tensorflow.python.platform import gfile

SOURCE_EMBEDDING_KEY = "embedding_attention_seq2seq/RNN/EmbeddingWrapper/embedding"
TARGET_EMBEDDING_KEY = "embedding_attention_seq2seq/embedding_attention_decoder/embedding"


def inject_pretrained_word2vec(session, word2vec_path, input_size, dict_dir,
                               source_vocab_size, target_vocab_size):
    word2vec_model = word2vec.load(word2vec_path, encoding="latin-1")
    print("w2v model created!")
    session.run(tf.initialize_all_variables())

    # NOTE: the vocabulary file names below are assumptions; adjust them
    # to whatever files actually live in dict_dir.
    source_vocab_path = os.path.join(dict_dir, "vocab.source")
    target_vocab_path = os.path.join(dict_dir, "vocab.target")

    assign_w2v_pretrained_vectors(session, word2vec_model, SOURCE_EMBEDDING_KEY,
                                  source_vocab_path, source_vocab_size)
    assign_w2v_pretrained_vectors(session, word2vec_model, TARGET_EMBEDDING_KEY,
                                  target_vocab_path, target_vocab_size)


def assign_w2v_pretrained_vectors(session, word2vec_model, embedding_key, vocab_path, vocab_size):
    vectors_variable = [v for v in tf.trainable_variables() if embedding_key in v.name]
    if len(vectors_variable) != 1:
        print("Word vector variable not found or too many. key: " + embedding_key)
        print("Existing embedding trainable variables:")
        print([v.name for v in tf.trainable_variables() if "embedding" in v.name])
        sys.exit(1)

    vectors_variable = vectors_variable[0]
    vectors = vectors_variable.eval(session=session)

    with gfile.GFile(vocab_path, mode="r") as vocab_file:
        counter = 0
        while counter < vocab_size:
            vocab_w = vocab_file.readline().replace("\n", "")
            # For each word in the vocabulary, inject the w2v vector if one
            # exists; otherwise keep the randomly initialized value.
            if vocab_w in word2vec_model:
                vectors[counter] = word2vec_model.get_vector(vocab_w)
            counter += 1

    # Re-run the variable's initializer, feeding the patched matrix as the
    # initial value (inputs[1] of the initializing Assign op).
    session.run([vectors_variable.initializer],
                {vectors_variable.initializer.inputs[1]: vectors})
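
A quick way to test the ordering hypothesis is to compare the first few embedding rows against the vocabulary file. A minimal sketch reusing the keys from the code above (check_embedding_rows is a hypothetical helper introduced here for illustration):

def check_embedding_rows(session, embedding_key, vocab_path, n=5):
    # Fetch the embedding matrix behind the given variable key.
    var = [v for v in tf.trainable_variables() if embedding_key in v.name][0]
    vectors = var.eval(session=session)
    with gfile.GFile(vocab_path, mode="r") as vocab_file:
        for i in range(n):
            word = vocab_file.readline().strip()
            # If the injection worked, row i should equal the w2v vector
            # of the word on line i of the vocabulary file.
            print(word, vectors[i][:5])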

Answer


I am not familiar with the seq2seq example, but in general you can inject your embeddings with the snippet below:

When you build your graph:

with tf.device("/cpu:0"): 
    embedding = tf.get_variable("embedding", [vocabulary_size, embedding_size])  
    inputs = tf.nn.embedding_lookup(embedding, input_data) 

When you execute (after building the graph and before starting training), just assign your saved embeddings to the embedding variable:

session.run(tf.assign(embedding, embeddings_that_you_want_to_use)) 
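
One caveat: every call to tf.assign adds a new op to the graph, so if you plan to assign more than once it is cheaper to build the assign op once with a placeholder. A sketch (embedding_placeholder and embedding_init are names introduced here for illustration):

# Build once, at graph construction time.
embedding_placeholder = tf.placeholder(tf.float32, [vocabulary_size, embedding_size])
embedding_init = embedding.assign(embedding_placeholder)

# Run later, feeding the pretrained matrix.
session.run(embedding_init, {embedding_placeholder: embeddings_that_you_want_to_use})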

The idea is that embedding_lookup will replace each id in input_data with the corresponding row of the embedding variable.
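
For example, a self-contained toy lookup (independent of the seq2seq model) shows that each id selects one row of the embedding matrix:

import tensorflow as tf

emb = tf.constant([[0.0, 0.0],   # row for id 0
                   [1.0, 1.0],   # row for id 1
                   [2.0, 2.0]])  # row for id 2
ids = tf.constant([2, 0, 1])
lookup = tf.nn.embedding_lookup(emb, ids)
with tf.Session() as sess:
    print(sess.run(lookup))  # [[2. 2.] [0. 0.] [1. 1.]]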
