Keras LSTM - validation loss increasing from epoch #1

Time: 2018-01-31 12:40:37

Tags: python machine-learning keras deep-learning data-science

I'm currently working on my first 'real' DL project, which is (surprise) predicting stock movements. I know the odds are 1000:1 against me making anything useful, but I'm enjoying it and want to see it through; I've learned more in my few weeks of attempting this than in the previous six months of completing MOOCs.

I'm building an LSTM with Keras, currently predicting one step ahead, and have attempted the task both as classification (up/down/steady) and now as a regression problem. Both run into the same roadblock: my validation loss never improves from epoch #1.

I can get the model to overfit such that training loss approaches zero with MSE (or 100% accuracy if classification), but at no stage does the validation loss decrease. This screams overfitting to my untrained eye, so I added varying amounts of dropout, but all that does is stifle the model's learning/training accuracy, with no improvement in validation accuracy.

I have tried changing a multitude of hyperparameters - learning rate, optimizer, batch size, lookback window, #layers, #units, dropout, #samples, etc. I've also tried subsets of the data and subsets of the features, but I just can't get it to work, so I'd be very grateful for any help.

Example graph with no dropout

Code below:

# Import saved full dataframe ~ 200 features
import feather
import numpy as np
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.callbacks import ReduceLROnPlateau

df = feather.read_dataframe('df_feathered')
df.set_index('time', inplace=True)

# Difference the dataset to make stationary
df = df.diff(periods=1, axis=0)
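# Note (an aside, not in the original): diff() leaves a NaN row at the
# start, which is presumably why the slicing further below starts at 1000.
# A simpler option would be:
# df = df.dropna()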

# MAKE LARGE SAMPLE FOR TESTING
df_train = df.loc['2017-3-1':'2017-6-30']
df_val = df.loc['2017-7-1':'2017-8-31']
df_test = df.loc['2017-9-1':'2017-9-30']

# Make x_train, x_val sets by dropping target variable
x_train = df_train.drop('close+1', axis=1)
x_val = df_val.drop('close+1', axis=1)

# Fit the scaler on the training data only, then apply the same
# transform to the validation set (stored as x_test below)
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_val)

# scaler = MinMaxScaler(feature_range=(0,1))
# x_train = scaler.fit_transform(df_train1)
# x_test = scaler.transform(df_val1)

# Create y_train, y_test (the target variable) for regression;
# note that 'test' here is really the validation split
y_train = df_train['close+1']
y_test = df_val['close+1']

# Define Lookback window for LSTM input
sliding_window = 15

# Convert x_train, x_test, y_train, y_test into 3d arrays
# (samples, timesteps, features) for LSTM input
dataXtrain = []
for i in range(len(x_train) - sliding_window - 1):
    dataXtrain.append(x_train[i:(i + sliding_window), :])

dataXtest = []
for i in range(len(x_test) - sliding_window - 1):
    dataXtest.append(x_test[i:(i + sliding_window), :])

dataYtrain = []
for i in range(len(y_train) - sliding_window - 1):
    dataYtrain.append(y_train[i + sliding_window])

dataYtest = []
for i in range(len(y_test) - sliding_window - 1):
    dataYtest.append(y_test[i + sliding_window])
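# An equivalent vectorized construction (a sketch, not from the original;
# assumes NumPy >= 1.20 for sliding_window_view) that yields the same
# (samples, timesteps, features) array as the training loop above:
# from numpy.lib.stride_tricks import sliding_window_view
# windows = sliding_window_view(x_train, sliding_window, axis=0)
# dataXtrain_alt = windows.transpose(0, 2, 1)[:len(x_train) - sliding_window - 1]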

# Make the data length divisible by a variety of batch_sizes for training
# Started at 1000 to not include replaced NaN values
dataXtrain = np.array(dataXtrain[1000:172008])
dataYtrain = np.array(dataYtrain[1000:172008])
dataXtest = np.array(dataXtest[1000:83944])
dataYtest = np.array(dataYtest[1000:83944])
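# A generic alternative to the hard-coded indices above (a sketch, not in
# the original): trim each array to the largest multiple of the batch size:
# n_train = (len(dataXtrain) // 256) * 256
# dataXtrain, dataYtrain = dataXtrain[:n_train], dataYtrain[:n_train]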

# Checking input shapes
print('dataXtrain size is: {}'.format((dataXtrain).shape))
print('dataXtest size is: {}'.format((dataXtest).shape))
print('dataYtrain size is: {}'.format((dataYtrain).shape))
print('dataYtest size is: {}'.format((dataYtest).shape))

### ACTUAL LSTM MODEL

batch_size = 256
timesteps = dataXtrain.shape[1]
features = dataXtrain.shape[2]

# Model set-up, stacked 4 layer stateful LSTM
model = Sequential()
model.add(LSTM(512, return_sequences=True, stateful=True, 
               batch_input_shape=(batch_size, timesteps, features)))
model.add(LSTM(256, stateful=True, return_sequences=True))
model.add(LSTM(256, stateful=True, return_sequences=True))
model.add(LSTM(128, stateful=True))
model.add(Dense(1, activation='linear'))

model.summary()
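# Note (an aside, not in the original): with stateful=True the LSTM layers
# carry cell state across batches, and Keras leaves resetting it to the
# user. A common pattern is to fit one epoch at a time and reset manually:
# for epoch in range(100):
#     model.fit(dataXtrain, dataYtrain, epochs=1,
#               batch_size=batch_size, shuffle=False)
#     model.reset_states()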

reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.9, patience=5, min_lr=0.000001, verbose=1)

def coeff_determination(y_true, y_pred):
    from keras import backend as K
    SS_res =  K.sum(K.square( y_true-y_pred ))
    SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
    return ( 1 - SS_res/(SS_tot + K.epsilon()) )
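# Note (not in the original): Keras evaluates metrics per batch and then
# averages them, so this R^2 is a batch-wise approximation rather than an
# R^2 computed over the whole validation set.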

model.compile(loss='mse',
              optimizer='nadam',
              metrics=[coeff_determination,'mse','mae','mape'])

history = model.fit(dataXtrain, dataYtrain,
                    validation_data=(dataXtest, dataYtest),
                    epochs=100, batch_size=batch_size,
                    shuffle=False, verbose=1, callbacks=[reduce_lr])

score = model.evaluate(dataXtest, dataYtest, batch_size=batch_size, verbose=1)
print(score)

predictions = model.predict(dataXtest, batch_size=batch_size)
print(predictions)

import matplotlib.pyplot as plt
%matplotlib inline
#plt.plot(history.history['mean_squared_error'])
#plt.plot(history.history['val_mean_squared_error'])
plt.plot(history.history['coeff_determination'])
plt.plot(history.history['val_coeff_determination'])
#plt.plot(history.history['mean_absolute_error'])
#plt.plot(history.history['mean_absolute_percentage_error'])
#plt.plot(history.history['val_mean_absolute_percentage_error'])
#plt.title("MSE")
plt.ylabel("R2")
plt.xlabel("epoch")
plt.legend(["train", "val"], loc="best")
plt.show()

plt.plot(history.history["loss"][5:])
plt.plot(history.history["val_loss"][5:])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "val"], loc="best")
plt.show()

plt.figure(figsize=(20,8))
plt.plot(dataYtest)
plt.plot(predictions)
plt.title("Prediction")
plt.ylabel("Price")
plt.xlabel("Time")
plt.legend(["Truth", "Prediction"], loc="best")
plt.show()

2 Answers:

Answer 0 (score: 1):

Try lowering the learning rate (and remove the dropout for now).

Why are you using

shuffle=False

in the fit() function?
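
A minimal sketch of the first suggestion, assuming the Keras 2.x optimizer API (Nadam's default learning rate is 0.002):

from keras.optimizers import Nadam

model.compile(loss='mse',
              optimizer=Nadam(lr=0.0002),  # 10x lower than the default
              metrics=['mse', 'mae'])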

Answer 1 (score: 1):

Perhaps you should keep in mind that you are predicting stock returns, which may well be impossible to predict. So the increase in val_loss is very likely not overfitting at all. Instead of adding more dropout, maybe you should think about adding more layers to increase the model's capacity.
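
A minimal sketch of that capacity suggestion (illustrative only, not a tested fix), extending the question's stack with one extra layer:

model = Sequential()
model.add(LSTM(512, return_sequences=True, stateful=True,
               batch_input_shape=(batch_size, timesteps, features)))
model.add(LSTM(512, stateful=True, return_sequences=True))
model.add(LSTM(256, stateful=True, return_sequences=True))
model.add(LSTM(256, stateful=True, return_sequences=True))
model.add(LSTM(128, stateful=True))
model.add(Dense(1, activation='linear'))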