Correctly handling the batch dimension when passing inputs in Keras

Asked: 2019-12-24 07:08:46

Tags: keras nlp lstm

My Keras input has shape (83194, 34, 30), but to pass it to the model I apparently only need (34, 30) (83194 is the number of examples). What should I change: 1. the shape of the data, or 2. the input shape I declare to Keras?

Error received: ValueError: Error when checking input: expected input_6 to have 2 dimensions, but got array with shape (83194, 34, 30)
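For context, the `shape` argument of Keras's `Input` layer describes a single example and excludes the batch axis, so a 3-D dataset is declared by its last two dimensions and passed to `fit` whole. A minimal sketch of that convention, with illustrative names that are not from my code:

import numpy as np
from keras.models import Model
from keras.layers import Input, Dense

x = np.random.rand(83194, 34, 30).astype('float32')  # (num_examples, timesteps, features)

inp = Input(shape=(34, 30))        # per-example shape only; the batch axis is implicit
out = Dense(1)(inp)                # toy head, just to build a model
m = Model(inputs=inp, outputs=out)
m.compile(optimizer='adam', loss='mse')
# fit() receives the full 3-D array; Keras slices off batches itself
# m.fit(x, np.zeros((83194, 34, 1)), epochs=1)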

Code:

# Keras 2.x imports
from keras.models import Model
from keras.layers import Input, Embedding, LSTM, RepeatVector, TimeDistributed, Dense
from keras.callbacks import ModelCheckpoint
from keras.utils import plot_model

n_units = 50
src_vocab = tar_vocab = 30
src_timesteps = a   # a and b are defined elsewhere; from the shapes below, a = 34 and b = 79
tar_timesteps = b
_input_ = Input(shape=[src_timesteps], dtype='int32')   # declares a per-example shape of (src_timesteps,)
embedding = Embedding(input_dim=src_vocab, output_dim=n_units, input_length=src_timesteps, mask_zero=False)(_input_)
layer_1 = LSTM(n_units)(embedding)            # encoder: final state only
repeat = RepeatVector(b)(layer_1)             # repeat the state once per target timestep
layer_2 = LSTM(n_units, return_sequences=True)(repeat)
final = TimeDistributed(Dense(tar_vocab, activation='softmax'))(layer_2)
model = Model(inputs=_input_, outputs=final)
model.compile(optimizer='adam', loss='categorical_crossentropy')
print(model.summary())

plot_model(model, to_file='model.png', show_shapes=True)
filename = 'model.h5'
checkpoint = ModelCheckpoint(filename, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
model.fit(train_, test_, epochs=30, callbacks=[checkpoint], verbose=2)   # train_ is the input, test_ is the target

train_ has shape (83194, 34, 30) and test_ has shape (83194, 79, 30).
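Since src_vocab = tar_vocab = 30 and the trailing axis of both arrays is 30, the data looks like a one-hot encoding over the vocabulary; that is an assumption on my part, but if it holds, the Embedding-based model above actually expects integer token indices of shape (83194, 34), which collapsing the one-hot axis would give, while the one-hot targets can stay as they are for categorical_crossentropy. A rough sketch under that assumption:

import numpy as np

# Assumption: the size-30 trailing axis is a one-hot encoding over the 30-word vocabulary.
train_ids = np.argmax(train_, axis=-1)   # (83194, 34, 30) -> (83194, 34) integer indices
print(train_ids.shape, test_.shape)      # (83194, 34) (83194, 79, 30)
# model.fit(train_ids, test_, epochs=30, callbacks=[checkpoint], verbose=2)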

0 Answers