Dubious results from a neural network

Posted: 2018-12-04 17:59:57

Tags: python keras neural-network time-series

I'm experimenting with the Keras library, trying to predict a time series, and I'm getting very poor results. I'd like to understand why the neural network can't handle even a trivial case. My (engineered) data looks like this:

The pattern is trivial: the result is exactly equal to the feature value. There are 10,000 rows, for example:

dataPointIndex,feature,result
0, 1, 1
1, 1, 1
2, 0, 0
3, 1, 1
4, 1, 1
5, 1, 1
6, 1, 1
7, 0, 0
8, 1, 1
9, 0, 0
10, 1, 1
...
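For reference, a minimal sketch that builds such a dataset in memory. The random seed and frame construction are my own assumptions, not the original generation code:

```python
import numpy as np
import pandas as pd

# Build a 10,000-row frame where `result` is identical to `feature`.
rng = np.random.default_rng(0)
feature = rng.integers(0, 2, size=10000)
df = pd.DataFrame({
    "dataPointIndex": np.arange(10000),
    "feature": feature,
    "result": feature,  # result == feature by construction
})
csv_text = df.to_csv(index=False)  # same column layout as the excerpt above
```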

My Keras code:

import keras
import matplotlib.pyplot as plt
import pandas as pd
import sklearn.preprocessing

TIMESERIES_LENGTH = 10
TIMESERIES_SAMPLING_RATE = 1
TIMESERIES_BATCH_SIZE = 16
TEST_SET_RATIO = 0.2
VALIDATION_SET_RATIO = 0.2

data = pd.read_csv("data/" + csv_path)
x = data.iloc[:, 1:2]  # .ix was removed in modern pandas; use .iloc
y = data.iloc[:, 2]

test_set_length = int(round(len(x) * TEST_SET_RATIO))
validation_set_length = int(round(len(x) * VALIDATION_SET_RATIO))
x_train_and_val = x[:-test_set_length]
y_train_and_val = y[:-test_set_length]
x_train = x_train_and_val[:-validation_set_length].values
y_train = y_train_and_val[:-validation_set_length].values
x_val = x_train_and_val[-validation_set_length:].values
y_val = y_train_and_val[-validation_set_length:].values
x_test = x[-test_set_length:].values
y_test = y[-test_set_length:].values

scaler = sklearn.preprocessing.StandardScaler().fit(x_train_and_val)

train_gen = keras.preprocessing.sequence.TimeseriesGenerator(
    x_train,
    y_train,
    length=TIMESERIES_LENGTH,
    sampling_rate=TIMESERIES_SAMPLING_RATE,
    batch_size=TIMESERIES_BATCH_SIZE
)

val_gen = keras.preprocessing.sequence.TimeseriesGenerator(
    x_val,
    y_val,
    length=TIMESERIES_LENGTH,
    sampling_rate=TIMESERIES_SAMPLING_RATE,
    batch_size=TIMESERIES_BATCH_SIZE
)

test_gen = keras.preprocessing.sequence.TimeseriesGenerator(
    x_test,
    y_test,
    length=TIMESERIES_LENGTH,
    sampling_rate=TIMESERIES_SAMPLING_RATE,
    batch_size=TIMESERIES_BATCH_SIZE
)

model = keras.models.Sequential()

model.add(keras.layers.Dense(100, activation='relu', input_shape=(TIMESERIES_LENGTH, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(1000, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))

model.compile(
    loss='binary_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)

history = model.fit_generator(
    train_gen,
    epochs=20,
    verbose=1,
    validation_data=val_gen
)

plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylabel('accuracy/loss')
plt.xlabel('epoch')
plt.legend(['training accuracy', 'validation accuracy', 'training loss', 'validation loss'], loc='upper left')
plt.show()

Results:

(image: plot of training/validation accuracy and loss per epoch)

I have also tried LSTM layers, but they performed just as poorly.

Any idea what I'm doing wrong? Thanks a lot.

2 Answers:

Answer 0 (score: 0)

It turns out that keras.preprocessing.sequence.TimeseriesGenerator expects y (y_train in my example) to be shifted by one relative to X (x_train in my example).

Your input data has to be shaped so that a subsequence of X ending at index n predicts the value of y at index n + 1. My original mistake was expecting it to predict the value at index n.
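The alignment can be illustrated without Keras. The sketch below replicates TimeseriesGenerator's pairing by hand (sample i is data[i : i+length], its target is targets[i+length]); shifting the target array by one makes each label equal the last value inside its window. Variable names and the helper function are illustrative, not part of the Keras API:

```python
import numpy as np

# Replicate TimeseriesGenerator's pairing by hand (a sketch, not the real API):
# sample i is data[i : i+length], and its target is targets[i+length].
def make_windows(data, targets, length):
    samples = np.stack([data[i:i + length] for i in range(len(data) - length)])
    labels = targets[length:]
    return samples, labels

x = np.array([1, 1, 0, 1, 1, 1, 1, 0, 1, 0], dtype=float)
y = x.copy()                                  # result == feature, as in the question

# Without shifting, the label for a window ending at index n is y[n + 1],
# an unpredictable future value here.
# Shift the targets so the label is y[n], the last value inside the window:
y_shifted = np.concatenate([y[:1], y[:-1]])   # y_shifted[k] = y[k-1]

samples, labels = make_windows(x, y_shifted, length=3)
# labels[0] is y[2] == 0.0, the last element of the first window x[0:3]
```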

Thanks to Daniel Möller for pointing me in the right direction.

Answer 1 (score: -2)

What is the mean of your target data? Is it zero? In my experience, a neural network in its default configuration has no constant term; you can obtain one by giving the last layer an affine (linear) activation function.
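As a quick sanity check of the suggestion above (a hypothetical sketch; the sample values are taken from the question's data excerpt):

```python
import numpy as np

# The first ten `result` values from the question's excerpt.
y = np.array([1, 1, 0, 1, 1, 1, 1, 0, 1, 0], dtype=float)
mean = y.mean()  # 0.7 for this excerpt: the targets are not zero-centered

# For non-centered regression targets, the last layer could use a linear
# (affine) activation instead of sigmoid, e.g.:
#   model.add(keras.layers.Dense(1, activation='linear'))
```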