How to fix loss = NaN in a Keras LSTM network?

Asked: 2016-10-01 21:04:24

Tags: machine-learning tensorflow deep-learning keras lstm

I am training an LSTM network with Keras, using TensorFlow as the backend. The network is for energy load forecasting, and the dataset has shape (32292, 24). But as soon as the program runs, the loss comes out as NaN from the very first epoch. How can I fix this?

PS: As for data preprocessing, I divided every value by 100000, since the original values were 4- or 5-digit numbers. So my values should lie in the range (0, 1).
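For reference, a minimal sketch of that preprocessing step (the file name and array names here are placeholders, not my actual code):

    import numpy as np

    raw = np.loadtxt('load_data.csv', delimiter=',')  # hypothetical input file
    scaled = raw / 100000.0                           # 4-5 digit values -> roughly (0, 1)
    assert not np.isnan(scaled).any()                 # make sure no NaNs enter the network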

import time
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout, Activation

def build_model():
    model = Sequential()
    # layer sizes: input dim, first LSTM units, second LSTM units, output dim
    layers = [1, 50, 100, 1]
    model.add(LSTM(input_dim=layers[0], output_dim=layers[1],
                   return_sequences=True))
    model.add(Dropout(0.2))
    model.add(LSTM(layers[2], return_sequences=False))
    model.add(Dropout(0.2))
    model.add(Dense(output_dim=layers[3]))
    model.add(Activation("linear"))

    start = time.time()
    model.compile(loss="mse", optimizer="rmsprop")
    print "Compilation Time : ", time.time() - start
    return model
def run_network():
    global_start_time = time.time()
    epochs = 5000
    model = build_model()
    try:
        # x_train, y_train, x_test, y_test are assumed to exist in scope
        model.fit(x_train, y_train, batch_size=400, nb_epoch=epochs,
                  validation_split=0.05)
        predicted = model.predict(x_test)
        predicted = np.reshape(predicted, (predicted.size,))
    except KeyboardInterrupt:
        print 'Training duration (s) : ', time.time() - global_start_time
    try:
        # plot the first 100 predictions
        fig = plt.figure()
        ax = fig.add_subplot(111)
        ax.plot(predicted[:100])
        plt.show()
    except Exception as e:
        print str(e)
        print 'Training duration (s) : ', time.time() - global_start_time

    return model, y_test, predicted

2 Answers:

Answer 0 (score: 0)

I changed the activation function of the Dense layer to "softmax" (in my case it was a multi-class classification problem), and it worked.
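If I read this right, the change looks roughly like the following in the same Keras 1.x style as the question; num_classes is a placeholder for the answerer's own output size:

    # sketch of the described fix, not the answerer's exact code:
    # replace the linear output with a softmax classifier head
    model.add(Dense(num_classes))
    model.add(Activation('softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

Note that softmax only makes sense for a classification target; for the asker's regression problem the loss would normally stay MSE.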

Answer 1 (score: 0)

For me, changing the activation function to "linear" worked!
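As a sketch, that is the standard output setup for a real-valued regression target (essentially what the question's code already uses):

    # linear (identity) output for real-valued predictions
    model.add(Dense(1))
    model.add(Activation('linear'))
    model.compile(loss='mse', optimizer='rmsprop')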