How can I implement "multi-directional" LSTMs?

Posted: 2016-12-12 21:51:03

Tags: deep-learning keras lstm

I am trying to implement the LSTM architecture from the paper "Dropout improves Recurrent Neural Networks for Handwriting Recognition": [Architecture from the paper Dropout improves Recurrent Neural Networks for Handwriting Recognition]

In the paper, the researchers define a multidirectional LSTM layer as "four LSTM layers applied in parallel, each with a particular scanning direction".
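The four scan directions (one per parallel LSTM) amount to reading the 2D input starting from each of its four corners, which can be expressed as axis flips applied before a standard top-left, left-to-right scan. A minimal numpy sketch of that idea (illustrative only, not from the paper's code):

```python
import numpy as np

# A toy 2x2 "image": each of the four parallel LSTMs sees the pixels
# in a different order, obtained by flipping the axes before scanning
# left-to-right, top-to-bottom.
img = np.array([[1, 2],
                [3, 4]])

scans = {
    "from top-left":     img,               # rows down, cols right
    "from top-right":    img[:, ::-1],      # columns reversed
    "from bottom-left":  img[::-1, :],      # rows reversed
    "from bottom-right": img[::-1, ::-1],   # both axes reversed
}

for name, s in scans.items():
    print(name, s.ravel())
```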

Here is what I think the network looks like in Keras:

from keras.layers import LSTM, Dropout, Input, Convolution2D, Merge, Dense, Activation, TimeDistributed
from keras.models import Sequential

def build_lstm_dropout(inputdim, outputdim, return_sequences=True, activation='tanh'):
    net_input = Input(shape=(None, inputdim))
    model = Sequential()
    lstm  = LSTM(output_dim=outputdim, return_sequences=return_sequences, activation=activation)(net_input)
    model.add(lstm)
    model.add(Dropout(0.5))
    return model

def build_conv(nb_filter, nb_row, nb_col, net_input, border_mode='valid'):
    return TimeDistributed(Convolution2D(nb_filter, nb_row, nb_col, border_mode=border_mode, activation='relu'))(net_input)

def build_lstm_conv(lstm, conv):
    model = Sequential()
    model.add(lstm)
    model.add(conv)
    return model

def build_merged_lstm_conv_layer(lstm_conv, mode='concat'):
    return Merge([lstm_conv, lstm_conv, lstm_conv, lstm_conv], mode=mode)

def build_model(feature_dim, loss='ctc_cost_for_train', optimizer='Adadelta'):
    net_input = Input(shape=(1, feature_dim, None))

    lstm = build_lstm_dropout(2, 6)
    conv = build_conv(64, 2, 4, net_input)

    lstm_conv = build_lstm_conv(lstm, conv)

    first_layer = build_merged_lstm_conv_layer(lstm_conv)

    lstm = build_lstm_dropout(10, 20)
    conv = build_conv(128, 2, 4, net_input)

    lstm_conv = build_lstm_conv(lstm, conv)

    second_layer = build_merged_lstm_conv_layer(lstm_conv)

    lstm = build_lstm_dropout(50, 1)
    fully_connected = Dense(1, activation='sigmoid')

    lstm_fc = Sequential()
    lstm_fc.add(lstm)
    lstm_fc.add(fully_connected)

    third_layer = Merge([lstm_fc, lstm_fc, lstm_fc, lstm_fc], mode='concat')

    final_model = Sequential()
    final_model.add(first_layer)
    final_model.add(Activation('tanh'))
    final_model.add(second_layer)
    final_model.add(Activation('tanh'))
    final_model.add(third_layer)

    final_model.compile(loss=loss, optimizer=optimizer, sample_weight_mode='temporal')

    return final_model

Here are my questions:

  1. If my implementation of the architecture is correct, how do I implement the scanning directions of the four LSTM layers?
  2. If my implementation is not correct, is it possible to implement such an architecture in Keras? If not, is there any other framework that could help me implement such an architecture?

1 Answer:

Answer 0 (score: 2)

You can check this for an implementation of a bidirectional LSTM. Basically, you just set go_backwards=True for the backward LSTM.
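For intuition on what go_backwards does: scanning a reversed sequence with a forward cell is the same as scanning the original sequence backwards. A toy numpy sketch with a trivial additive "cell" (purely illustrative, not Keras code):

```python
import numpy as np

def scan(seq):
    """Stand-in for an RNN: return the running state after each step."""
    h, states = 0.0, []
    for x in seq:
        h = h + x  # trivial recurrence, just for illustration
        states.append(h)
    return np.array(states)

seq = np.array([1.0, 2.0, 3.0])
forward = scan(seq)         # states for a left-to-right scan
backward = scan(seq[::-1])  # same cell on the reversed sequence,
                            # i.e. a backward scan
# forward  -> [1., 3., 6.]
# backward -> [3., 5., 6.]
```

Note that the two scans produce different state trajectories even though this toy cell is commutative in its final state; a real LSTM cell differs at every step.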

However, in your case you would have to write a "mirror" + reshape layer to reverse the rows. The mirror layer could look like this (I'm using a Lambda layer here for convenience): Lambda(lambda x: x[:,::-1,:])
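In plain numpy terms, that mirror operation is just a reversed slice along the middle (timestep/row) axis of a (batch, steps, features) tensor:

```python
import numpy as np

# What Lambda(lambda x: x[:, ::-1, :]) computes: reverse the middle
# (timestep/row) axis of a (batch, steps, features) tensor.
x = np.arange(2 * 3 * 2).reshape(2, 3, 2)
mirrored = x[:, ::-1, :]

# The first timestep of the mirrored tensor is the last of the original;
# batch and feature axes are untouched.
assert (mirrored[:, 0, :] == x[:, -1, :]).all()
assert mirrored.shape == x.shape
```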