What should I do when GPU memory is fully used?

Posted: 2019-09-27 09:21:43

Tags: python tensorflow keras deep-learning seq2seq

Recently, I started using a Tesla T4 GPU with 12 vCPUs and 60 GB RAM. I am training a Seq2Seq bidirectional LSTM with an attention layer; the model has 38,863,916 training parameters. While training my Seq2Seq model, I get the following error: GPU sync failed. I searched for the error and learned that it means my GPU memory is full. Below is my code:
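A first diagnostic step (my suggestion, not part of the original question) is to let TensorFlow allocate GPU memory on demand rather than grabbing it all at start-up; this often turns a vague "GPU sync failed" into an explicit out-of-memory message that names the failing allocation:

```python
import tensorflow as tf

# List the physical GPUs TensorFlow can see (empty list on a CPU-only machine)
gpus = tf.config.experimental.list_physical_devices('GPU')

# Enable incremental allocation so the real OOM error surfaces clearly.
# Must be called before any GPU memory is allocated (i.e. before model building).
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```

This only changes how memory is allocated, not how much the model needs, so it is a diagnostic aid rather than a fix.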

encoder_inputs = Input(shape=(max_x_len,))

emb1 = Embedding(len(x_voc), 100, weights=[x_voc], trainable=False)(encoder_inputs)

encoder = Bidirectional(LSTM(latent_dim, return_state=True, return_sequences=True))
encoder_outputs0, _, _, _, _ = encoder(emb1)

encoder = Bidirectional(LSTM(latent_dim, return_state=True, return_sequences=True))
encoder_outputs2, forward_h, forward_c, backward_h, backward_c = encoder(encoder_outputs0)

encoder_states = [forward_h, forward_c, backward_h, backward_c]


### Decoder
decoder_inputs = Input(shape=(None,))

emb2 = Embedding(len(y_voc), 100, weights=[y_voc], trainable=False)(decoder_inputs)

decoder_lstm = Bidirectional(LSTM(latent_dim, return_sequences=True, return_state=True))
decoder_outputs2, _, _, _, _ = decoder_lstm(emb2, initial_state=encoder_states)


attn_layer = AttentionLayer(name='attention_layer')
attn_out, attn_states = attn_layer([encoder_outputs2, decoder_outputs2]) 
decoder_concat_input = Concatenate(axis=-1, name='concat_layer')([decoder_outputs2, attn_out])

decoder_dense = Dense(len(y_voc), activation='softmax')
decoder_outputs = decoder_dense(decoder_concat_input)


model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

model.fit([x_inc, x_dec], y_dec, batch_size=32, epochs=500)

x_inc.shape => (1356, 433)
x_dec.shape => (1356, 131)
y_dec.shape => (1356, 131, 10633)
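A back-of-the-envelope estimate (my addition, not from the original post) shows why memory runs out: the one-hot target tensor y_dec with shape (1356, 131, 10633) alone occupies about 7 GiB as float32, before activations and gradients are counted. One common mitigation is to store integer labels and compile with loss='sparse_categorical_crossentropy' instead, which shrinks the targets by a factor of the vocabulary size. A sketch of both the estimate and the conversion, using a toy array:

```python
import numpy as np

# Estimated float32 footprint of the one-hot targets y_dec (1356, 131, 10633)
samples, timesteps, vocab = 1356, 131, 10633
gib = samples * timesteps * vocab * 4 / 2**30   # bytes -> GiB, roughly 7 GiB

# Conversion sketch (assumption, not from the post): collapse the one-hot axis
# to integer class ids, then use loss='sparse_categorical_crossentropy'.
toy_one_hot = np.eye(5, dtype=np.float32)[[[1, 4], [2, 0]]]  # shape (2, 2, 5)
toy_sparse = np.argmax(toy_one_hot, axis=-1)                 # shape (2, 2)
```

With sparse labels, y_dec would shrink to shape (1356, 131), and the softmax output layer stays unchanged; only the loss function and target encoding differ.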

0 Answers:

No answers yet.