I am using an RNN for time-series forecasting. Here are the details of my model: loss function: mean squared error; optimizer: Adam. The input data is scaled to the range 0 to 1 and contains no NaN values.
The problem occurs while training: after 100-200 epochs, the MSE suddenly becomes NaN. Any thoughts on what might be causing this? Here is the code for my model.
import tensorflow as tf

n_steps_Begin = n_training_samples
n_features_Begin = train_store_Begin.shape[2]  # Number of features. To begin with, using only 'Sales' as input feature
n_neurons_Begin = 50  # Number of neurons in each cell
n_outputs_Begin = 1   # 1 output, since only Sales has to be predicted
learning_rate_Begin = 0.001
n_iterations_Begin = 10000

tf.reset_default_graph()

with tf.name_scope("TrainingData"):
    X_Begin = tf.placeholder(tf.float32, [None, n_steps_Begin, n_features_Begin], name="InputData")
    y_Begin = tf.placeholder(tf.float32, [None, n_steps_Begin, n_outputs_Begin], name="OutputData")

with tf.name_scope("RecurrentNeuralNetwork"):
    cell_Begin = tf.contrib.rnn.OutputProjectionWrapper(
        tf.contrib.rnn.LSTMCell(num_units=n_neurons_Begin, activation=tf.nn.elu),
        output_size=n_outputs_Begin)
    outputs_Begin, states_Begin = tf.nn.dynamic_rnn(cell_Begin, X_Begin, dtype=tf.float32)

with tf.name_scope("LossFunction"):
    loss_Begin = tf.reduce_mean(tf.square(outputs_Begin - y_Begin))
    optimizer_Begin = tf.train.AdamOptimizer(learning_rate=learning_rate_Begin)
    training_op_Begin = optimizer_Begin.minimize(loss_Begin)

init = tf.global_variables_initializer()
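For context: a loss that stays finite for many iterations and then turns into NaN often points to exploding gradients in the recurrent layer. One commonly suggested mitigation is gradient clipping, which in TF 1.x means replacing `optimizer.minimize(...)` with `compute_gradients`, `tf.clip_by_global_norm`, and `apply_gradients`. Below is a minimal NumPy sketch of what global-norm clipping does; the function name, toy gradients, and threshold of 5.0 are illustrative, not taken from the model above.

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Rescale a list of gradient arrays so that their combined L2 norm
    # is at most clip_norm (this mirrors tf.clip_by_global_norm).
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, clip_norm / (global_norm + 1e-12))
    return [g * scale for g in grads], global_norm

# Two toy gradient tensors whose global norm is sqrt(3^2 + 4^2 + 12^2) = 13
grads = [np.array([3.0, 4.0]), np.array([12.0])]
clipped, norm = clip_by_global_norm(grads, clip_norm=5.0)
```

If the gradients' global norm exceeds the threshold, every tensor is scaled down by the same factor, so the update direction is preserved while its magnitude is bounded.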