Training accuracy improves, but test accuracy does not

Time: 2021-01-08 08:49:49

Tags: python time-series lstm prediction stock

I am trying to predict whether the S&P 500 will go up or down, using an LSTM. Here is the problem: training accuracy keeps improving, but test accuracy does not. Moreover, most of the time the model predicts either all 0s (down) or all 1s (up), so test accuracy always lands between 44% (corresponding to all 0s) and 56% (corresponding to all 1s).
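(For reference, those two bounds are just the majority-class baselines of the test labels. A minimal sketch to print them, assuming y_test is the same 0/1 NumPy label array used in the code below:)

import numpy as np

up_ratio = float(np.mean(y_test))                 # fraction of "up" (1) labels in the test set
print(f'all-ones baseline:  {up_ratio:.2%}')      # accuracy of always predicting "up"
print(f'all-zeros baseline: {1 - up_ratio:.2%}')  # accuracy of always predicting "down"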

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, BatchNormalization, Dropout
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import SGD

LAYERS = [40, 20, 15, 8, 8, 1]       # number of units in hidden and output layers
M_TRAIN = X_train.shape[0]           # number of training examples (2D)
M_TEST = X_test.shape[0]             # number of test examples (2D), full = X_test.shape[0]
T = X_train.shape[1]                 # number of timesteps per example
N = X_train.shape[2]                 # number of features
BATCH = 10                           # batch size
EPOCH = 20                           # number of epochs
LR = 0.5                             # learning rate of the gradient descent (used in SGD below)
LAMBD = 0                            # lambda in L2 regularization
DP = 0                               # dropout rate
RDP = 0.0                            # recurrent dropout rate
print(f'layers={LAYERS}, train_examples={M_TRAIN}, test_examples={M_TEST}')
print(f'batch = {BATCH}, timesteps = {T}, features = {N}, epochs = {EPOCH}')
print(f'lr = {LR}, lambda = {LAMBD}, dropout = {DP}, recurr_dropout = {RDP}')

# Build the Model
model = Sequential()
model.add(LSTM(input_shape=(T, N), units=LAYERS[0],
               activation='tanh', recurrent_activation='hard_sigmoid',
               kernel_regularizer=l2(LAMBD), recurrent_regularizer=l2(LAMBD),
               dropout=DP, recurrent_dropout=RDP,
               return_sequences=True, return_state=False,
               stateful=False, unroll=False
              ))
#model.add(BatchNormalization())
#model.add(Dropout(0.2))
model.add(LSTM(units=LAYERS[1],
               activation='tanh', recurrent_activation='hard_sigmoid',
               kernel_regularizer=l2(LAMBD), recurrent_regularizer=l2(LAMBD),
               dropout=DP, recurrent_dropout=RDP,
               return_sequences=True, return_state=False,
               stateful=False, unroll=False
              ))
#model.add(BatchNormalization())
model.add(LSTM(units=LAYERS[2],
               activation='tanh', recurrent_activation='hard_sigmoid',
               kernel_regularizer=l2(LAMBD), recurrent_regularizer=l2(LAMBD),
               dropout=DP, recurrent_dropout=RDP,
               return_sequences=False, return_state=False,
               stateful=False, unroll=False
              ))
model.add(BatchNormalization())
model.add(Dense(units=LAYERS[3], activation='sigmoid'))
#model.add(BatchNormalization())
model.add(Dense(units=LAYERS[4], activation='relu'))
#model.add(BatchNormalization())
model.add(Dense(units=LAYERS[5], activation='sigmoid'))


opt = SGD(learning_rate=LR, momentum=0.2)


model.compile(loss='binary_crossentropy',
              metrics=['accuracy'],
              optimizer=opt)
history = model.fit(X_train, y_train,
                    epochs=EPOCH,
                    batch_size=BATCH,
                    validation_data=(X_test[:M_TEST], y_test[:M_TEST]),
                    shuffle=False,
                    verbose=1)
# Evaluate the model:
train_loss, train_acc = model.evaluate(X_train, y_train,
                                       batch_size=M_TRAIN, verbose=0)
test_loss, test_acc = model.evaluate(X_test[:M_TEST], y_test[:M_TEST],
                                     batch_size=M_TEST, verbose=0)
print('-'*65)
print(f'train accuracy = {round(train_acc * 100, 4)}%')
print(f'test accuracy = {round(test_acc * 100, 4)}%')
print(f'test error = {round((1 - test_acc) * M_TEST)} out of {M_TEST} examples')
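To check whether the network has collapsed to a single class, it helps to look at the hard predictions themselves rather than at accuracy alone. A minimal sketch, assuming the model and arrays defined above:

preds = (model.predict(X_test[:M_TEST]) > 0.5).astype(int).ravel()  # threshold sigmoid output to 0/1
values, counts = np.unique(preds, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))  # e.g. {1: M_TEST} means it predicts "up" everywhere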

Here is one result: (screenshot of the training output omitted)

0 Answers

There are no answers yet.