Why is my artificial neural network not learning?

Date: 2018-07-28 11:09:01

Tags: python tensorflow machine-learning neural-network

I am using a simple feed-forward neural network for electricity load forecasting. Here is my code:

...

num_periods = 24
f_horizon = 48  #forecast horizon

...

#NN design
tf.reset_default_graph()

inputs = num_periods    #input vector size
hidden = 100    
output = num_periods    #output vector size
learning_rate = 0.01
seed = 128

x = tf.placeholder(tf.float32, [None, inputs])
y = tf.placeholder(tf.float32, [None, output])

weights = {
    'hidden': tf.Variable(tf.random_normal([inputs, hidden], seed=seed)),
    'output': tf.Variable(tf.random_normal([hidden, output], seed=seed))
}

biases = {
    'hidden': tf.Variable(tf.random_normal([1,hidden], seed=seed)),
    'output': tf.Variable(tf.random_normal([1,output], seed=seed))
}

hidden_layer = tf.add(tf.matmul(x, weights['hidden']), biases['hidden'])
hidden_layer = tf.nn.relu(hidden_layer)

output_layer = tf.matmul(hidden_layer, weights['output']) + biases['output']

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = output_layer, labels = y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

init = tf.initialize_all_variables()           #initialize all the variables
epochs = 1000     #number of iterations or training cycles, includes both the feed-forward pass and backpropagation
mape = []

...

for st in state.values():
    print("State: ", st, end='\n')
    with tf.Session() as sess:
        init.run()
        for ep in range(epochs):
            sess.run([optimizer, cost], feed_dict={x: x_batches[st], y: y_batches[st]})
    print("\n")

This is the output I get for the NSW state: cost1 cost2

As we can see, the cost keeps increasing as the epochs go on. Why is that happening?

1 answer:

Answer 0: (score: 1)

You are using the wrong loss: predicting electricity load sounds like a regression problem, and cross-entropy is only meant for classification.

Something like mean squared error should work.
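To make that concrete, here is a minimal sketch of mean squared error on plain Python lists (the numbers are illustrative, not the question's data):

```python
# Mean squared error: the average squared distance between prediction
# and target -- a natural loss for real-valued outputs such as load in MW.
def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

pred = [100.0, 102.0, 98.0]    # hypothetical predicted load (MW)
target = [101.0, 100.0, 99.0]  # hypothetical actual load (MW)
print(mse(pred, target))  # 2.0
```

In the question's TF1 code this amounts to replacing the `softmax_cross_entropy_with_logits` line with something like `cost = tf.reduce_mean(tf.square(output_layer - y))`, or equivalently `tf.losses.mean_squared_error(labels=y, predictions=output_layer)`.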