Keras, Embedding and LSTMs: shape error

Asked: 2019-01-30 04:02:53

Tags: python tensorflow keras recurrent-neural-network

I have a long text containing 1,514,669 words (26,791 unique words). I have created a dictionary with the unique words as keys and word indices as values:

 {'neighbors': 0,
 'prowlings': 1,
 'trapped': 2,
 'succeed': 3,
 'shrank': 4,
 'napkin': 5,
 'verdict': 6,
 'hosted': 7,
 'lists': 8,
 'meat': 9,
 'ation': 10,
 'captor': 11,
 'corking': 12,
 'keys': 13,
 'Sardinian': 14,
 'include': 15,
 'Tradable': 16,
 'princes': 17,
 'witnessed': 18,
 'rant': 19,
 ...}

I then created an input array of shape (1514669, 32) like this:

import numpy as np

rnn_inputs = [word_to_index_dict[each] for each in ebooks_texts.split(' ') if each != '']
rnn_targets = rnn_inputs[1:] + [rnn_inputs[0]]

rnn_inputs = [rnn_inputs[i:i+32] for i in range(len(rnn_inputs)) if len(rnn_inputs[i:i+32]) == 32]
rnn_targets = [rnn_targets[i:i+32] for i in range(len(rnn_targets)) if len(rnn_targets[i:i+32]) == 32]

rnn_inputs = np.array(rnn_inputs)
rnn_targets = np.array(rnn_targets)

So each array row holds 32 words: the first row covers words 0-31, the second row words 1-32, and so on.
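The windowing scheme above can be checked on a toy sequence (indices 0-9 with a window length of 4 instead of the full 1,514,669 words with length 32):

```python
import numpy as np

# Toy stand-in for the question's word-index sequence.
seq = list(range(10))
window = 4

# Same comprehension as above: keep only full-length windows.
windows = [seq[i:i + window] for i in range(len(seq)) if len(seq[i:i + window]) == window]
arr = np.array(windows)

print(arr.shape)       # (7, 4): one row per full window
print(arr[0], arr[1])  # [0 1 2 3] [1 2 3 4]: each row shifts by one word
```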

The goal is to predict the next word.

The model architecture is:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.callbacks import ModelCheckpoint

model = Sequential()

model.add(Embedding(len(word_to_index_dict), 128, input_length=32))
model.add(LSTM(units=128, return_sequences=True))
model.add(Dense(len(word_to_index_dict), activation='softmax'))

model.summary()
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics = ['accuracy'])
checkpointer = ModelCheckpoint(filepath='models/best-weights.hdf5', verbose=1, save_best_only=True)
model.fit(rnn_inputs, rnn_targets, batch_size=1, epochs=1, validation_split=.2, callbacks=[checkpointer], verbose=1)

I get the following summary and error:

Layer (type)                 Output Shape              Param #   
=================================================================
embedding_1 (Embedding)      (None, 32, 128)           3429248   
_________________________________________________________________
lstm_1 (LSTM)                (None, 32, 128)           131584    
_________________________________________________________________
dense_1 (Dense)              (None, 32, 26791)         3456039   
=================================================================
Total params: 7,016,871
Trainable params: 7,016,871
Non-trainable params: 0
_________________________________________________________________
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-1-63ea81786e79> in <module>
    117 checkpointer = ModelCheckpoint(filepath='models/best-weights.hdf5', verbose=1, save_best_only=True)
    118 
--> 119 model.fit(rnn_inputs, rnn_targets, batch_size=1, epochs=1, validation_split=.2, callbacks=[checkpointer], verbose=1)
    120 

~/miniconda3/envs/tf-cpu/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
    950             sample_weight=sample_weight,
    951             class_weight=class_weight,
--> 952             batch_size=batch_size)
    953         # Prepare validation data.
    954         do_validation = False

~/miniconda3/envs/tf-cpu/lib/python3.6/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
    787                 feed_output_shapes,
    788                 check_batch_axis=False,  # Don't enforce the batch size.
--> 789                 exception_prefix='target')
    790 
    791             # Generate sample-wise weight values given the `sample_weight` and

~/miniconda3/envs/tf-cpu/lib/python3.6/site-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    126                         ': expected ' + names[i] + ' to have ' +
    127                         str(len(shape)) + ' dimensions, but got array '
--> 128                         'with shape ' + str(data_shape))
    129                 if not check_batch_axis:
    130                     data_shape = data_shape[1:]

ValueError: Error when checking target: expected dense_1 to have 3 dimensions, but got array with shape (1514669, 32)

I have searched Google and the documentation but cannot find a solution. Any ideas about what I am doing wrong?

I am using Python 3.6 on Ubuntu 18.

1 Answer:

Answer 0 (score: 1)

It looks like you have not one-hot encoded your targets. Right now your targets have shape (1514669, 32), but to be compatible with your output layer they should have shape (1514669, 32, vocab_size), i.e. each of the 32 words in every window one-hot encoded.
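A minimal sketch of that one-hot encoding, using a toy vocabulary of 10 words instead of the real 26,791 (the names `vocab_size` and `targets` are placeholders, not from the question):

```python
import numpy as np

# Toy stand-in: 5 windows of 32 integer word indices, small vocabulary.
vocab_size = 10
targets = np.random.randint(0, vocab_size, size=(5, 32))

# One-hot encode every target word by indexing into an identity matrix
# (same result as keras.utils.to_categorical).
one_hot = np.eye(vocab_size, dtype=np.float32)[targets]

print(one_hot.shape)  # (5, 32, 10)
```

Note that at the real vocabulary size the full one-hot array would be roughly 1,514,669 × 32 × 26,791 float32 values, which is far beyond memory, so in practice you would encode batch by batch in a generator or use the sparse-loss alternative instead.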

Alternatively, you can compile the model with sparse_categorical_crossentropy as the loss instead of categorical_crossentropy. In that case your targets should have shape (1514669, 32, 1) and do not need to be one-hot encoded.
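That alternative amounts to adding a trailing axis to the integer targets and changing one string in `compile()`. A sketch on toy data (the `targets` array stands in for the question's `rnn_targets`):

```python
import numpy as np

# Toy targets: 5 windows of 32 integer word indices.
targets = np.random.randint(0, 26791, size=(5, 32))

# sparse_categorical_crossentropy expects a trailing axis of size 1.
targets_sparse = np.expand_dims(targets, axis=-1)

print(targets_sparse.shape)  # (5, 32, 1)

# The only other change is the loss in compile(), e.g.:
# model.compile(optimizer='Adam', loss='sparse_categorical_crossentropy',
#               metrics=['accuracy'])
```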