An ensemble of CNN and RNN models in Keras

Date: 2017-09-13 02:44:45

Tags: nlp deep-learning keras rnn

I am trying to implement in Keras the model from the paper Ensemble Application of Convolutional and Recurrent Neural Networks for Multi-label Text Categorization.

The model looks like this (architecture figure taken from the paper).

My code is:

from keras.layers import (Input, Embedding, Conv1D, MaxPooling1D, Concatenate,
                          Flatten, Dense, Reshape, LSTM)
from keras.models import Model

# CNN branch over the document text
document_input = Input(shape=(None,), dtype='int32')
embedding_layer = Embedding(vocab_size, WORD_EMB_SIZE, weights=[initial_embeddings],
                            input_length=DOC_SEQ_LEN, trainable=True)
convs = []
filter_sizes = [2,3,4,5]

doc_embedding = embedding_layer(document_input)
for filter_size in filter_sizes:
    l_conv = Conv1D(filters=256, kernel_size=filter_size, padding='same', activation='relu')(doc_embedding)
    l_pool = MaxPooling1D(filter_size)(l_conv)
    convs.append(l_pool)

l_merge = Concatenate(axis=1)(convs)
l_flat = Flatten()(l_merge)
l_dense = Dense(100, activation='relu')(l_flat)
l_dense_3d = Reshape((1, int(l_dense.shape[1])))(l_dense)

# RNN branch over the gene/variation text, seeded with the CNN text feature vector
gene_variation_input = Input(shape=(None,), dtype='int32')
gene_variation_embedding = embedding_layer(gene_variation_input)
# this call raises the ValueError shown below
rnn_layer = LSTM(100, return_sequences=False, stateful=True)(gene_variation_embedding, initial_state=[l_dense_3d])

l_flat = Flatten()(rnn_layer)
output_layer = Dense(9, activation='softmax')(l_flat)
model = Model(inputs=[document_input, gene_variation_input], outputs=[output_layer])

I do not know whether I am setting up the text feature vector from the figure above correctly. When I tried it, I got this error:

ValueError: Layer lstm_9 expects 3 inputs, but it received 2 input tensors. Input received: [<tf.Tensor 'embedding_10_1/Gather:0' shape=(?, ?, 200) dtype=float32>, <tf.Tensor 'reshape_9/Reshape:0' shape=(?, 1, 100) dtype=float32>]

I did follow the Keras documentation and example code, specifically the section with the note on specifying the initial state of RNNs.
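
For reference, the documented pattern passes one initial-state tensor per state that the RNN cell carries, each of shape (batch_size, units); an LSTM carries two states (h and c). Below is a minimal standalone sketch of that pattern, with layer names and sizes that are illustrative rather than taken from my model above:

from keras.layers import Input, Dense, Embedding, LSTM
from keras.models import Model

# Feature vector that will seed the RNN state.
feature_input = Input(shape=(16,))
state = Dense(100, activation='relu')(feature_input)   # shape (batch, 100)

# Sequence input fed to the LSTM.
sequence_input = Input(shape=(None,), dtype='int32')
sequence_emb = Embedding(1000, 64)(sequence_input)

# An LSTM carries two states, so initial_state needs two tensors of shape (batch, units).
lstm_out = LSTM(100)(sequence_emb, initial_state=[state, state])

output = Dense(9, activation='softmax')(lstm_out)
demo_model = Model(inputs=[feature_input, sequence_input], outputs=output)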

Any help is appreciated.

Update: after the suggestions and some more reading, the code/model now looks like this:

# shared embedding layer for both inputs
embedding_layer = Embedding(vocab_size, WORD_EMB_SIZE, weights=[initial_embeddings], trainable=True)

# CNN branch over the document text
document_input = Input(shape=(DOC_SEQ_LEN,), batch_shape=(BATCH_SIZE, DOC_SEQ_LEN), dtype='int32')
doc_embedding = embedding_layer(document_input)

convs = []
filter_sizes = [2,3,4,5]

for filter_size in filter_sizes:
    l_conv = Conv1D(filters=256, kernel_size=filter_size, padding='same', activation='relu')(doc_embedding)
    l_pool = MaxPooling1D(filter_size)(l_conv)
    convs.append(l_pool)

l_merge = Concatenate(axis=1)(convs)
l_flat = Flatten()(l_merge)
l_dense = Dense(100, activation='relu')(l_flat)

# RNN branch over the gene/variation text, seeded with the CNN text feature vector
gene_variation_input = Input(shape=(GENE_VARIATION_SEQ_LEN,), batch_shape=(BATCH_SIZE, GENE_VARIATION_SEQ_LEN), dtype='int32')
gene_variation_embedding = embedding_layer(gene_variation_input)

rnn_layer = LSTM(100, return_sequences=False,
                 batch_input_shape=(BATCH_SIZE, GENE_VARIATION_SEQ_LEN, WORD_EMB_SIZE),
                 stateful=False)(gene_variation_embedding, initial_state=[l_dense, l_dense])

output_layer = Dense(9, activation='softmax')(rnn_layer)

model = Model(inputs=[document_input, gene_variation_input], outputs=[output_layer])

Model summary:

____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
input_8 (InputLayer)             (32, 9)               0                                            
____________________________________________________________________________________________________
input_7 (InputLayer)             (32, 4000)            0                                            
____________________________________________________________________________________________________
embedding_6 (Embedding)          multiple              73764400    input_7[0][0]                    
                                                                   input_8[0][0]                    
____________________________________________________________________________________________________
conv1d_13 (Conv1D)               (32, 4000, 256)       102656      embedding_6[0][0]                
____________________________________________________________________________________________________
conv1d_14 (Conv1D)               (32, 4000, 256)       153856      embedding_6[0][0]                
____________________________________________________________________________________________________
conv1d_15 (Conv1D)               (32, 4000, 256)       205056      embedding_6[0][0]                
____________________________________________________________________________________________________
conv1d_16 (Conv1D)               (32, 4000, 256)       256256      embedding_6[0][0]                
____________________________________________________________________________________________________
max_pooling1d_13 (MaxPooling1D)  (32, 2000, 256)       0           conv1d_13[0][0]                  
____________________________________________________________________________________________________
max_pooling1d_14 (MaxPooling1D)  (32, 1333, 256)       0           conv1d_14[0][0]                  
____________________________________________________________________________________________________
max_pooling1d_15 (MaxPooling1D)  (32, 1000, 256)       0           conv1d_15[0][0]                  
____________________________________________________________________________________________________
max_pooling1d_16 (MaxPooling1D)  (32, 800, 256)        0           conv1d_16[0][0]                  
____________________________________________________________________________________________________
concatenate_4 (Concatenate)      (32, 5133, 256)       0           max_pooling1d_13[0][0]           
                                                                   max_pooling1d_14[0][0]           
                                                                   max_pooling1d_15[0][0]           
                                                                   max_pooling1d_16[0][0]           
____________________________________________________________________________________________________
flatten_4 (Flatten)              (32, 1314048)         0           concatenate_4[0][0]              
____________________________________________________________________________________________________
dense_6 (Dense)                  (32, 100)             131404900   flatten_4[0][0]                  
____________________________________________________________________________________________________
lstm_4 (LSTM)                    (32, 100)             120400      embedding_6[1][0]                
                                                                   dense_6[0][0]                    
                                                                   dense_6[0][0]                    
____________________________________________________________________________________________________
dense_7 (Dense)                  (32, 9)               909         lstm_4[0][0]                     
====================================================================================================
Total params: 206,008,433
Trainable params: 206,008,433
Non-trainable params: 0
____________________________________________________________________________________________________
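
For completeness, here is a sketch of how I might compile and train this two-input model; the array names (X_documents, X_gene_variation, y_labels) are placeholders for illustration, not variables from the code above:

# Assumed placeholder arrays:
#   X_documents      -> (num_samples, DOC_SEQ_LEN)             int word indices
#   X_gene_variation -> (num_samples, GENE_VARIATION_SEQ_LEN)  int word indices
#   y_labels         -> (num_samples, 9)                       one-hot class labels
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Because the Input layers fix batch_shape, num_samples should be a multiple of BATCH_SIZE.
model.fit([X_documents, X_gene_variation], y_labels,
          batch_size=BATCH_SIZE,
          epochs=5)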

1 Answer:

Answer 0 (score: 2):

An LSTM has 2 hidden states (the hidden state h and the cell state c), but you are providing only 1 initial state. You can do one of the following:

Replace the LSTM with an RNN that has only 1 hidden state, such as a GRU:

rnn_layer = GRU(100, return_sequences=False, stateful=True)(gene_variation_embedding, initial_state=[l_dense_3d])

Or pass zeros as the initial value of the LSTM's second hidden state:

zeros = Lambda(lambda x: K.zeros_like(x), output_shape=lambda s: s)(l_dense_3d)
rnn_layer = LSTM(100, return_sequences=False, stateful=True)(gene_variation_embedding, initial_state=[l_dense_3d, zeros])
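
Putting the second option together with the imports it needs, a minimal sketch follows; it reuses the tensors from the question, but swaps in the 2D l_dense tensor (initial_state tensors are expected to have shape (batch_size, units)) and leaves stateful at its default:

from keras import backend as K
from keras.layers import Lambda, LSTM

# Zero tensor with the same shape as l_dense, i.e. (batch_size, 100),
# used as the initial cell state c; l_dense seeds the hidden state h.
zeros = Lambda(lambda x: K.zeros_like(x), output_shape=lambda s: s)(l_dense)

rnn_layer = LSTM(100, return_sequences=False)(gene_variation_embedding,
                                              initial_state=[l_dense, zeros])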