from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.callbacks import ModelCheckpoint

model = Sequential()
model.add(Embedding(630, 210))
model.add(LSTM(1024, dropout=0.2, return_sequences=True))
model.add(LSTM(1024, dropout=0.2, return_sequences=True))
model.add(Dense(210, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
filepath = 'ner_2-{epoch:02d}-{loss:.5f}.hdf5'
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callback_list = [checkpoint]
model.fit(X, y, epochs=20, batch_size=1024, callbacks=callback_list)
Input and output dimensions of a Keras LSTM model do not match

X: the input vector has shape (204564, 630, 1)
y: the target vector has shape (204564, 210, 1)

That is, 210 outputs should be predicted for every 630 inputs, but the code throws the following error when fitting:
ValueError Traceback (most recent call last)
<ipython-input-57-05a6affb6217> in <module>()
50 callback_list = [checkpoint]
51
---> 52 model.fit(X, y , epochs = 20, batch_size = 1024, callbacks = callback_list)
53 print('successful')
ValueError: Error when checking model input: expected embedding_8_input to have 2 dimensions, but got array with shape (204564, 630, 1)
You have one dimension too many. Why is it (204564, 630, 1) rather than just (204564, 630)? –
Also: do you have one of 210 predictions for each of the 630 sequence elements, or 210 predictions for the whole sequence? Could you elaborate on what your 'y' represents? –
The Embedding layer also seems to be defined incorrectly. What is your vocabulary size, and what embedding dimension do you want? –
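Following up on the comments above: an `Embedding` layer expects 2-D integer input of shape `(batch, sequence_length)`, so the trailing singleton axis in X is what triggers the error. A minimal sketch of the reshape, using a small hypothetical array in place of the real X (only its shape is taken from the question):

```python
import numpy as np

# Toy stand-in for the asker's X of shape (204564, 630, 1);
# only the trailing singleton axis matters for this error.
X = np.zeros((4, 630, 1), dtype="int32")

# Embedding expects (batch, sequence_length), so drop the last
# axis before calling model.fit:
X_2d = np.squeeze(X, axis=-1)
print(X_2d.shape)  # (4, 630)
```

Note that the first argument of `Embedding` should be the vocabulary size (the number of distinct token ids in X), which may or may not be 630; that is exactly what the last comment is asking about.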