Saving and restoring a Keras BLSTM CTC model

Time: 2017-11-14 10:29:11

Tags: python tensorflow neural-network keras lstm

I have been working on a deep neural network for speech emotion recognition. I used a Keras bidirectional LSTM with CTC loss. I trained the model and saved it:

model_json = model.to_json()
with open("ctc_model.json", "w") as json_file:
    json_file.write(model_json)
model.save_weights("ctc_weights.h5")

The problem is that I cannot use this model to test on unseen data, because the model takes four inputs and computes the CTC loss; it is only good for building and training. So how can I save the model so that it needs just one input, not the labels and lengths? Basically, how can I save it as this function: test_func = K.function([net_input], [output])

def ctc_lambda_func(args):
    y_pred, labels, input_length, label_length = args

    shift = 2
    y_pred = y_pred[:, shift:, :]
    input_length -= shift
    return K.ctc_batch_cost(labels, y_pred, input_length, label_length)
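As a side note, the slicing in ctc_lambda_func simply drops the first `shift` output frames (often RNN warm-up frames) and shortens the reported input lengths to match. A minimal NumPy illustration with assumed toy shapes:

```python
import numpy as np

shift = 2
y_pred = np.random.rand(4, 200, 11)   # (batch, time, nb_class + 1), hypothetical sizes
input_length = np.full((4, 1), 200)   # original sequence lengths

y_pred = y_pred[:, shift:, :]         # drop the first two output frames
input_length = input_length - shift   # lengths must shrink accordingly

print(y_pred.shape)        # (4, 198, 11)
print(input_length[0, 0])  # 198
```

If the lengths were not reduced, K.ctc_batch_cost would be told the predictions are longer than they actually are.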
def build_model(nb_feat, nb_class, optimizer='Adadelta'):
    net_input = Input(name="the_input", shape=(200, nb_feat))
    forward_lstm1  = LSTM(output_dim=64, 
                      return_sequences=True, 
                      activation="tanh"
                     )(net_input)
    backward_lstm1 = LSTM(output_dim=64, 
                      return_sequences=True, 
                      activation="tanh",
                      go_backwards=True
                     )(net_input)
    blstm_output1  = Merge(mode='concat')([forward_lstm1, backward_lstm1])

    forward_lstm2  = LSTM(output_dim=64, 
                      return_sequences=True, 
                      activation="tanh"
                     )(blstm_output1)
    backward_lstm2 = LSTM(output_dim=64, 
                      return_sequences=True, 
                      activation="tanh",
                      go_backwards=True
                     )(blstm_output1)
    blstm_output2  = Merge(mode='concat')([forward_lstm2, backward_lstm2])

    hidden = TimeDistributed(Dense(512, activation='tanh'))(blstm_output2)
    output = TimeDistributed(Dense(nb_class + 1, activation='softmax'))(hidden)

    labels = Input(name='the_labels', shape=[1], dtype='float32')
    input_length = Input(name='input_length', shape=[1], dtype='int64')
    label_length = Input(name='label_length', shape=[1], dtype='int64')
    loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name="ctc")([output, labels, input_length, label_length])
    model = Model(input=[net_input, labels, input_length, label_length], output=[loss_out])
    model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=optimizer, metrics=[])

    test_func = K.function([net_input], [output])

    return model, test_func
model, test_func = build_model(nb_feat=nb_feat, nb_class=nb_class, optimizer=optimizer)
for epoch in range(number_epoches):
    inputs_train = {'the_input': X_train[i:i+batch_size],
                    'the_labels': y_train[i:i+batch_size],
                    'input_length': np.sum(X_train_mask[i:i+batch_size], axis=1, dtype=np.int32),
                    'label_length': np.squeeze(y_train_mask[i:i+batch_size]),
                   }
    outputs_train = {'ctc': np.zeros([inputs_train["the_labels"].shape[0]])}

    ctcloss = model.train_on_batch(x=inputs_train, y=outputs_train)

    total_ctcloss += ctcloss * inputs_train["the_input"].shape[0] * 1.
    loss_train[epoch] = total_ctcloss / X_train.shape[0]
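Note the dummy-target pattern in the loop above: model.compile uses loss={'ctc': lambda y_true, y_pred: y_pred}, so the zeros fed as outputs_train are pure placeholders and the training loss is just the mean of the Lambda layer's per-sample CTC cost. A plain-Python sketch of what that loss does (the cost values below are hypothetical):

```python
import numpy as np

# Matches model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, ...):
# the "loss" ignores the targets and passes the Lambda output straight through.
ctc_loss = lambda y_true, y_pred: y_pred

per_sample_cost = np.array([2.3, 1.7, 0.9, 3.1])  # hypothetical CTC Lambda output
dummy_targets = np.zeros_like(per_sample_cost)    # what outputs_train feeds in

batch_loss = float(np.mean(ctc_loss(dummy_targets, per_sample_cost)))
print(batch_loss)  # 2.0
```

This is why the network itself must receive the labels and lengths as inputs: the real CTC computation happens inside the graph, not in the Keras loss function.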

Here is my model summary (posted as an image in the original question).

1 Answer:

Answer 0 (score: 1)

Try the following solution:

import keras.backend as K

def get_prediction_function(model):
    input_tensor = model.layers[0].input    # the_input
    output_tensor = model.layers[-5].output # softmax output, before the CTC branch
    # Passing K.learning_phase() lets us force inference mode at call time
    net_function = K.function([input_tensor, K.learning_phase()], [output_tensor])
    def _result_function(x):
        return net_function([x, 0])[0]      # 0 = test/inference phase
    return _result_function

Now you can obtain your network's prediction function like this:

test_function = get_prediction_function(model)
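The prediction function returns per-frame softmax probabilities, which still need to be decoded into a label sequence. Keras offers K.ctc_decode for this; as a framework-free sketch, best-path (greedy) CTC decoding takes the argmax per frame, collapses consecutive repeats, and removes the blank label (assumed here to be the last class index, matching the nb_class + 1 output):

```python
import numpy as np

def greedy_ctc_decode(probs, blank):
    """Best-path CTC decoding: argmax per frame, collapse
    consecutive repeats, then drop blank labels."""
    best_path = np.argmax(probs, axis=-1)  # (time,)
    decoded, prev = [], None
    for label in best_path:
        if label != prev and label != blank:
            decoded.append(int(label))
        prev = label
    return decoded

# Toy output: 5 frames, 3 real classes + blank (index 3)
probs = np.array([[0.90, 0.05, 0.03, 0.02],
                  [0.80, 0.10, 0.05, 0.05],
                  [0.10, 0.10, 0.10, 0.70],
                  [0.05, 0.90, 0.03, 0.02],
                  [0.10, 0.80, 0.05, 0.05]])
print(greedy_ctc_decode(probs, blank=3))  # [0, 1]
```

For emotion classification (one label per utterance) you may instead just take the most probable non-blank class over the sequence, but the collapse-and-drop rule above is the standard way to read CTC output.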