I built a recurrent neural network to predict time-series data. It seemed to produce reasonable results: it starts at 33% accuracy over three classes, as expected, and improves from there to some extent. I wanted to test the network just to make sure it was actually working, so I created a basic input/output set like this:
in1 in2 in3 out
1 2 3 0
5 6 7 1
9 10 11 2
1 2 3 0
5 6 7 1
9 10 11 2
I duplicated this pattern for a million rows in a CSV file. I would have thought the network could easily recognize this pattern, since it is always the same. I tried learning rates of 0.1, 0.01, 0.001, and 0.0001, but the accuracy always stays around 33% (34% for lr = 0.0001). I will post my code below. Shouldn't the network be able to pick this up easily, or is there something wrong with my setup?
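For reference, a file with this repeating pattern can be regenerated with a short script like the following (the filename is illustrative; only the three-row block above comes from the data shown):

```python
import pandas as pd

# the three fixed rows from the table above
block = pd.DataFrame({'in1': [1, 5, 9],
                      'in2': [2, 6, 10],
                      'in3': [3, 7, 11],
                      'out': [0, 1, 2]})

# repeat the block; raise the factor to reach ~a million rows
df = pd.concat([block] * 4, ignore_index=True)
# df.to_csv('numbers.csv', index=False)  # illustrative path

print(df['out'].value_counts().to_dict())  # each class appears equally often
```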
import os
import pandas as pd
from sklearn import preprocessing
from collections import deque
import random
import numpy as np
import time
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM, CuDNNLSTM, BatchNormalization
from tensorflow.keras.models import load_model
df = pd.read_csv('/home/drew/Desktop/numbers.csv')
times = sorted(df.index.values)
last_5pct = times[-int(0.1*len(times))]
validation_df = df[(df.index >= last_5pct)]
main_df = df[(df.index < last_5pct)]
epochs = 10
batch_size = 64
train_x = main_df[['in1','in2','in3']]
validation_x = validation_df[['in1','in2','in3']]
train_y = main_df[['out']]
validation_y = validation_df[['out']]
train_x = train_x.values
validation_x = validation_x.values
train_x = train_x.reshape(train_x.shape[0],1,3)
validation_x = validation_x.reshape(validation_x.shape[0],1,3)
model = Sequential()
model.add(LSTM(128, input_shape=(train_x.shape[1:]), return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(LSTM(128, input_shape=(train_x.shape[1:]), return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Dense(32, activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(3, activation="softmax"))
opt = tf.keras.optimizers.Adam(lr=0.0001, decay=1e-6)
model.compile(loss='sparse_categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
model.fit(train_x, train_y,
batch_size=batch_size,
epochs=epochs,
validation_data=(validation_x, validation_y))
Results:
Train on 943718 samples, validate on 104857 samples
Epoch 1/10
943718/943718 [==============================] - 125s 132us/sample - loss: 0.0040 - acc: 0.3436 - val_loss: 2.3842e-07 - val_acc: 0.3439
Epoch 2/10
943718/943718 [==============================] - 111s 118us/sample - loss: 2.1557e-06 - acc: 0.3437 - val_loss: 2.3842e-07 - val_acc: 0.3435
...
Epoch 6/10
719104/943718 [=====================>........] - ETA: 25s - loss: 2.4936e-07 - acc: 0.3436
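Note that this plateau matches the chance level for three balanced classes: with the labels uniformly distributed, a constant prediction already scores one third. A quick numpy sanity check (the balanced labels here just mirror the pattern above):

```python
import numpy as np

labels = np.tile([0, 1, 2], 1000)       # balanced three-class labels
constant_guess = np.zeros_like(labels)  # always predict class 0
acc = (constant_guess == labels).mean()
print(acc)  # 1/3, the level the training log plateaus at
```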
Answer 0 (score: 0)
It seems you are not predicting a sequence as the output, so there is no need to return sequences after the second LSTM layer. If you change the second LSTM to:

model.add(LSTM(128, input_shape=(train_x.shape[1:])))

then you should get high accuracy.
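The shape mismatch behind the frozen metric can be illustrated without TensorFlow. With return_sequences=True the final Dense layer emits one softmax per timestep, shape (batch, 1, 3), while the sparse labels have shape (batch, 1); dropping return_sequences collapses the time axis to the (batch, 3) that sparse_categorical_crossentropy and the accuracy metric expect here. A minimal numpy sketch of the shapes only (the values are dummies):

```python
import numpy as np

batch = 4
# with return_sequences=True the Dense head runs per timestep: (batch, 1, 3)
preds_seq = np.random.rand(batch, 1, 3)
labels = np.zeros((batch, 1), dtype=int)   # sparse labels: (batch, 1)

# without return_sequences the last LSTM emits only its final state,
# so the head produces a single softmax per sample: (batch, 3)
preds_vec = preds_seq[:, -1, :]

print(preds_seq.shape, preds_vec.shape, labels.shape)
```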