Keras time-series sequence-to-sequence model does not fit at all and the predictions are shifted

Date: 2018-12-16 03:40:28

Tags: tensorflow keras time-series python-3.5

I based my code on this post: https://weiminwang.blog/2017/09/29/multivariate-time-series-forecast-using-seq2seq-in-tensorflow/#multivariate

Here is the repository with my edited notebook: https://github.com/itisyeetimetoday/Seq_to_Seq

Here is my data: https://docs.google.com/spreadsheets/d/1B-Z_qjTPX_LppHh0Bv4WWD0KWB3X5RZqQjE_vLKkw8w/edit?usp=sharing
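For reference, a minimal sketch of how I read data like this in, assuming the sheet has been exported to a local CSV. The filename data.csv and the 3/2 column split are placeholders rather than my actual notebook code; the split just mirrors input_dim = 3 and output_dim = 2 in the model below.

import numpy as np
import pandas as pd

# Placeholder loading step: assumes the Google Sheet was exported to "data.csv".
df = pd.read_csv("data.csv")

# Treat the first three columns as input signals and the last two as output signals,
# matching input_dim = 3 and output_dim = 2 used by the model.
inputs = df.iloc[:, :3].values    # shape (num_rows, 3)
outputs = df.iloc[:, -2:].values  # shape (num_rows, 2)
print(inputs.shape, outputs.shape)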

Here is my code:

import os

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

from build_model_multi_variate import *

## Parameters
learning_rate = 0.1    # 0.005
lambda_l2_reg = 0.05   # 0.0017

## Network Parameters
# length of input signals
input_seq_len = 15
# length of output signals
output_seq_len = 20
# size of LSTM cell
hidden_dim = 64
# num of input signals
input_dim = 3
# num of output signals
output_dim = 2
# num of stacked lstm layers
num_stacked_layers = 2
# gradient clipping - to avoid gradient exploding
GRADIENT_CLIPPING = 4  # 2.5

The complete build_model_multi_variate.py is here: https://github.com/aaxwaz/Multivariate-Time-Series-forecast-using-seq2seq-in-TensorFlow/blob/master/build_model_multi_variate.py
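Roughly, that file wires lambda_l2_reg and GRADIENT_CLIPPING into the loss and training op. The snippet below is only a simplified, stand-alone sketch of that pattern (MSE plus an L2 penalty, gradients clipped by global norm), not a verbatim copy of the linked file; the placeholder shapes here are toy values.

import tensorflow as tf

# Toy stand-ins for the decoder outputs and targets, just to make the sketch runnable.
preds_in = tf.placeholder(tf.float32, [None, 2])
targets = tf.placeholder(tf.float32, [None, 2])
weights = tf.Variable(tf.random_normal([2, 2]), name='Weights_out')

# MSE term on the model outputs
output_loss = tf.reduce_mean(tf.pow(tf.matmul(preds_in, weights) - targets, 2))
# L2 penalty over trainable variables, scaled by lambda_l2_reg
reg_loss = tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()])
loss = output_loss + lambda_l2_reg * reg_loss

optimizer = tf.train.AdamOptimizer(learning_rate)
grads, tvars = zip(*optimizer.compute_gradients(loss))
# Clip gradients to GRADIENT_CLIPPING to avoid exploding gradients
clipped, _ = tf.clip_by_global_norm(list(grads), GRADIENT_CLIPPING)
train_op = optimizer.apply_gradients(zip(clipped, tvars))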

I train with the following code:

total_iteractions = 1000
# 100, 200 is best so far, 1000
batch_size = 16
# 16 is the best so far
KEEP_RATE = 0.5
train_losses = []
val_losses = []

x = np.linspace(0, 40, 130)
train_data_x = x[:185]  # note: x only has 130 points, so this slice keeps all of them

rnn_model = build_graph(feed_previous=False)

saver = tf.train.Saver()

init = tf.global_variables_initializer()
with tf.Session() as sess:

    sess.run(init)

    print("Training losses: ")
    for i in range(total_iteractions):
        batch_input, batch_output = generate_train_samples(batch_size=batch_size)

        feed_dict = {rnn_model['enc_inp'][t]: batch_input[:, t] for t in range(input_seq_len)}
        feed_dict.update({rnn_model['target_seq'][t]: batch_output[:, t] for t in range(output_seq_len)})
        _, loss_t = sess.run([rnn_model['train_op'], rnn_model['loss']], feed_dict)
        print(loss_t)

    temp_saver = rnn_model['saver']()
    save_path = temp_saver.save(sess, os.path.join('./', 'multivariate_ts_model0'))

print("Checkpoint saved at: ", save_path)

I then load the model (the full loading code can be found in my GitHub repository; a condensed sketch of that step is included after the plotting code below) and visualize the predictions:

test_seq_input = np.array(generate_samples_for_input(train_data_x[-15:])).transpose(1, 0)
test_seq_output = np.array(generate_samples_for_output(train_data_x[-20:])).transpose(1, 0)

plt.title("Predicted and true output sequences")
# i1, i2, i3 = plt.plot(range(15), np.array(true_input_signals(x[95:110])).transpose(1, 0), 'c:', label='true input sequence')
p1, p2 = plt.plot(range(15, 35), final_preds, 'ro', label='predicted outputs')
t1, t2 = plt.plot(range(15, 35), np.array(true_output_signals(x[110:])).transpose(1, 0), 'co', label='true outputs')

print(final_preds)
print(np.array(true_output_signals(x[110:])).transpose(1, 0))

plt.legend(handles=[p1, t1], loc='upper left')
plt.show()
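For completeness, the loading / prediction step that produces final_preds (elided above; the full version is in my repository) looks roughly like this. This is a condensed sketch following the linked tutorial, with the dictionary keys ('enc_inp', 'target_seq', 'saver', 'reshaped_outputs') taken from build_model_multi_variate.py, so some details may differ slightly from my actual notebook.

rnn_model = build_graph(feed_previous=True)  # at inference the decoder feeds back its own outputs

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    rnn_model['saver']().restore(sess, os.path.join('./', 'multivariate_ts_model0'))

    # encoder inputs: one (1, input_dim) slice per time step of the test window
    feed_dict = {rnn_model['enc_inp'][t]: test_seq_input[t].reshape(1, -1)
                 for t in range(input_seq_len)}
    # decoder targets are unused at inference time, so feed zeros
    feed_dict.update({rnn_model['target_seq'][t]: np.zeros([1, output_dim], dtype=np.float32)
                      for t in range(output_seq_len)})

    final_preds = sess.run(rnn_model['reshaped_outputs'], feed_dict)  # list of (1, output_dim) arrays
    final_preds = np.concatenate(final_preds, axis=0)                 # -> (output_seq_len, output_dim)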

However, my fit is still far off. I have tried increasing the number of training steps (which changes the predictions somewhat), and I have tried changing all of the hyperparameters. Here is a picture of the predictions: [image: plot of the predicted vs. true output sequences]

Do I need to change something in my code?

0 Answers:

There are no answers yet.