keras LSTM model: input and output dimensions do not match

2017-07-06 152 views
1
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.callbacks import ModelCheckpoint

model = Sequential()

model.add(Embedding(630, 210))
model.add(LSTM(1024, dropout = 0.2, return_sequences = True))
model.add(LSTM(1024, dropout = 0.2, return_sequences = True))
model.add(Dense(210, activation = 'softmax'))

model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])

filepath = 'ner_2-{epoch:02d}-{loss:.5f}.hdf5'
checkpoint = ModelCheckpoint(filepath, monitor = 'loss', verbose = 1, save_best_only = True, mode = 'min')
callback_list = [checkpoint]

model.fit(X, y, epochs = 20, batch_size = 1024, callbacks = callback_list)

X: the input vector has shape (204564, 630, 1)

y: the target vector has shape (204564, 210, 1)

That is, for every 630 inputs, 210 outputs should be predicted, but the code throws the following error when fitting:

ValueError        Traceback (most recent call last) 
<ipython-input-57-05a6affb6217> in <module>() 
    50 callback_list = [checkpoint] 
    51 
---> 52 model.fit(X, y , epochs = 20, batch_size = 1024, callbacks = callback_list) 
    53 print('successful') 



ValueError: Error when checking model input: expected embedding_8_input to have 2 dimensions, but got array with shape (204564, 630, 1) 
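The check that fails here is easy to reproduce in plain NumPy (the tiny batch size of 4 below is a stand-in for the real 204564 rows): an Embedding layer consumes a 2-D array of integer token indices, shaped (batch, sequence_length), while the X being passed carries a trailing extra axis.

```python
import numpy as np

# Small stand-in for the real input of shape (204564, 630, 1)
X = np.zeros((4, 630, 1), dtype="int32")

# An Embedding layer expects 2-D integer indices: (batch, sequence_length)
print(X.ndim)              # 3 -> one dimension too many
print(X.squeeze(-1).ndim)  # 2 -> what the layer actually expects
```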

Could someone please explain why this error occurs and how to fix it?

+3

You have one dimension too many. Why is the shape (204564, 630, 1) and not just (204564, 630)? –

+0

Actually - do you have one of 210 predictions for each of the 630 sequence elements, or 210 predictions for the whole sequence? Could you elaborate on what your `y` represents? –

+0

It also looks like the `Embedding` layer is defined incorrectly. What are your vocabulary size and your desired embedding dim? –

Answer

1

The message says:

Your first layer expects input with 2 dimensions: (BatchSize, SomeOtherDimension). But your input has three dimensions: (BatchSize=204564, SomeOtherDimension=630, 1).

Well... either remove the 1 from your input, or reshape it inside the model:

Solution 1 - remove it from the input:

X = X.reshape((204564,630)) 
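A quick sanity check (with a small dummy array standing in for the real X) that this reshape only drops the trailing length-1 axis and leaves the values untouched:

```python
import numpy as np

X = np.arange(2 * 630).reshape(2, 630, 1)  # dummy stand-in for the real X
X2 = X.reshape((2, 630))                   # same fix as X.reshape((204564, 630))

assert X2.shape == (2, 630)
assert (X2 == X[:, :, 0]).all()            # data unchanged, only the shape differs
```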

Solution 2 - add a Reshape layer:

model = Sequential() 
model.add(Reshape((630,),input_shape=(630,1))) 
model.add(Embedding.....)
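Separately, as one of the comments notes, `Embedding(630, 210)` looks suspicious: the first argument of `Embedding(input_dim, output_dim)` is the vocabulary size (the number of distinct token indices), not the sequence length 630. A plain-NumPy sketch of what the lookup does, where the vocabulary size of 1000 is a made-up placeholder:

```python
import numpy as np

vocab_size, embed_dim, seq_len = 1000, 210, 630   # vocab_size is an assumption
table = np.random.rand(vocab_size, embed_dim)     # what Embedding(vocab_size, embed_dim) learns

# 2-D integer input: (batch, sequence_length), values in [0, vocab_size)
ids = np.random.randint(0, vocab_size, size=(2, seq_len))
vectors = table[ids]                              # lookup -> (batch, seq_len, embed_dim)
assert vectors.shape == (2, seq_len, embed_dim)
```

So every index in X must be smaller than the first argument of the Embedding layer, or the lookup is out of range.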