I am trying to build an LSTM autoencoder to find anomalies in a set of signals. The autoencoder seems to work well, except at the beginning of each curve: every reconstructed curve starts from zero, which is in fact the mean of each curve, since I standardized them. See this image:
An example of original vs reconstructed curve
This happens with all of the curves. The training curve looks like this:
What am I missing here?
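For context, by standardizing I mean z-scoring each curve independently, roughly along these lines (a minimal sketch; the standardize_rows helper is only illustrative):

import pandas as pd

def standardize_rows(df):
    # z-score each curve (row) independently, so every curve has zero mean and unit variance
    return df.sub(df.mean(axis=1), axis=0).div(df.std(axis=1), axis=0)

So a reconstruction stuck at zero corresponds to the mean of the original, unscaled curve.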
The autoencoder is built with Keras as follows:
import numpy as np
import pandas as pd
from keras.layers import Input, LSTM
from keras.models import Model

input_dim = 1
latent_dim = 50
array_length = lstm_df.shape[1]

inputs = Input(shape=(array_length, input_dim))
# encoding
encoded = LSTM(latent_dim, return_sequences=True)(inputs)
# decoding
decoded = LSTM(input_dim, return_sequences=True, activation='linear')(encoded)
# I also tried the default activation (tanh) here
sequence_autoencoder = Model(inputs, decoded)
sequence_autoencoder.compile(optimizer='rmsprop', loss='mean_squared_error')
# I also tried 'adam' optimizer
# And trained as follows
epochs = 80
history = sequence_autoencoder.fit(lstm_df.values.reshape(lstm_df.shape[0], array_length, input_dim),
                                   lstm_df.values.reshape(lstm_df.shape[0], array_length, input_dim),
                                   verbose=True,
                                   epochs=epochs,
                                   batch_size=32,
                                   shuffle=True)
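# A rough sketch of how the training curve shown above can be plotted
# (assumes matplotlib is available)
import matplotlib.pyplot as plt
plt.plot(history.history['loss'], label='training loss')
plt.xlabel('epoch')
plt.ylabel('mean squared error')
plt.legend()
plt.show()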
# Prediction
prediction = sequence_autoencoder.predict(test.values.reshape(test.shape[0], array_length, input_dim))
mse = np.power(test.values - prediction.reshape(test.shape), 2).mean(1)
error_df = pd.DataFrame({'reconstruction error': -mse.ravel()}, index=test.index)
# The MSE is negated to follow the convention that smaller scores indicate stronger outliers
outliers_fraction = .2
no_of_anomalies = int(len(error_df) * outliers_fraction)
anomalous_ids = error_df.sort_values('reconstruction error').head(no_of_anomalies).index
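For reference, the original-vs-reconstructed comparison in the image above can be drawn with something like this (a minimal sketch, assuming matplotlib; curve_id is just a hypothetical index of the curve to inspect):

import matplotlib.pyplot as plt

curve_id = 0  # hypothetical: which test curve to inspect
plt.plot(test.values[curve_id], label='original')
plt.plot(prediction[curve_id].ravel(), label='reconstructed')
plt.xlabel('time step')
plt.legend()
plt.show()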
The train and test data were downloaded from https://github.com/h2oai/h2o-2/tree/master/smalldata/anomaly (ecg_discord_train.csv and ecg_discord_test.csv).
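For completeness, this is roughly how those files feed into the DataFrames used above (a sketch, assuming the CSVs have no header row and reusing the standardize_rows helper sketched earlier):

import pandas as pd

# hypothetical local paths to the downloaded files
train_raw = pd.read_csv('ecg_discord_train.csv', header=None)
test_raw = pd.read_csv('ecg_discord_test.csv', header=None)

# per-curve standardization (zero mean, unit variance per row)
lstm_df = standardize_rows(train_raw)
test = standardize_rows(test_raw)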