Optimal embedding layer size in Keras

Asked: 2020-07-10 05:25:27

Tags: python python-3.x numpy keras nlp

I am training an LSTM model and feeding custom BERT word embeddings into a Keras Embedding layer. The embedding matrix I pass to the Embedding layer has dimensions 30,576 x 24,576. However, when I hand this matrix to the Embedding layer, Google Colab crashes because RAM usage shoots up to its maximum limit.

What can I do so that the Embedding layer accepts this matrix?
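For reference, a rough back-of-the-envelope estimate of the memory the quoted shapes imply (a minimal sketch only; the 30,576 x 768 shape is what the Embedding layer in the code below expects for its weights, and the 100,000 sample count is a made-up placeholder, not from the post):

# Rough float32 / int32 memory estimates for the shapes mentioned above.
# NOTE: n_samples = 100_000 is a hypothetical placeholder.
vocab_size, emb_dim, seq_len, n_samples = 30_576, 768, 24_576, 100_000

matrix_as_posted = vocab_size * 24_576 * 4 / 1e9   # 30,576 x 24,576 float32 values
matrix_expected  = vocab_size * emb_dim * 4 / 1e9  # 30,576 x 768 float32 values
padded_sequences = n_samples * seq_len * 4 / 1e9   # int32 ids after pad_sequences

print(f"30,576 x 24,576 embedding matrix: ~{matrix_as_posted:.1f} GB")
print(f"30,576 x 768 embedding matrix:    ~{matrix_expected:.2f} GB")
print(f"padded sequences (100k samples):  ~{padded_sequences:.1f} GB")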

Here is my code:

import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import (Embedding, BatchNormalization, Activation, SpatialDropout1D,
                          Conv1D, Bidirectional, LSTM, Dropout, Dense)

# X_train, Y_train, Y_test and the pre-computed BERT matrix `embedding`
# are defined elsewhere in the notebook.
max_words = 30576   # vocabulary size (input_dim of the Embedding layer)
max_len = 24576     # length every input sequence is padded to

# Tokenize the training text and pad every sequence to max_len
tok = Tokenizer(num_words=max_words)
tok.fit_on_texts(X_train)
sequences = tok.texts_to_sequences(X_train)
sequences_matrix = sequence.pad_sequences(sequences, maxlen=max_len)
Y_train = np.array(Y_train)
Y_test = np.array(Y_test)

model = Sequential()
# weights=[embedding] expects a matrix of shape (max_words, 768)
model.add(Embedding(max_words, 768, input_length=max_len, weights=[embedding]))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(SpatialDropout1D(0.5))
model.add(Conv1D(32, kernel_size=3, activation='relu'))
model.add(Bidirectional(LSTM(32)))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.summary()
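
For completeness, the model above would typically be compiled and trained along these lines (a minimal sketch; the optimizer, batch size, epoch count, and validation split are assumptions, not part of the original post):

# Training sketch -- hyperparameters below are placeholders, not from the post
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(sequences_matrix, Y_train, batch_size=32, epochs=3, validation_split=0.1)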

0 Answers:

No answers yet.