What is wrong with my neural network model?

Date: 2017-12-08 04:58:56

Tags: python machine-learning tensorflow deep-learning

I have a dataset of 178 examples, each with 13 features and 1 label. The labels are stored as one-hot arrays. My training set consists of 158 of these examples.
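
For reference, a minimal sketch of how integer class labels could be turned into one-hot arrays with NumPy (the label values 0-2 are an assumption here, matching the three output classes used below):

import numpy as np

labels = np.array([0, 2, 1, 0])    # hypothetical integer labels
one_hot = np.eye(3)[labels]        # index rows of the 3x3 identity matrix
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]
#  [1. 0. 0.]]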

Here is my model:

import tensorflow as tf  # TensorFlow 1.x API

x = tf.placeholder(tf.float32, [None, training_data.shape[1]])
y_ = tf.placeholder(tf.float32, [None, training_data_labels.shape[1]])

node_1 = 300
node_2 = 300
node_3 = 300
out_n = 3   

# Hidden layer 1
W1 = tf.Variable(tf.random_normal([training_data.shape[1], node_1]))
B1 = tf.Variable(tf.random_normal([node_1]))
y1 = tf.add(tf.matmul(x,W1),B1)
y1 = tf.nn.relu(y1)

# Hidden layer 2
W2 = tf.Variable(tf.random_normal([node_1, node_2]))
B2 = tf.Variable(tf.random_normal([node_2]))
y2 = tf.add(tf.matmul(y1,W2),B2)
y2 = tf.nn.relu(y2)

# Hidden layer 3
W3 = tf.Variable(tf.random_normal([node_2, node_3]))
B3 = tf.Variable(tf.random_normal([node_3]))
y3 = tf.add(tf.matmul(y2,W3),B3)
y3 = tf.nn.relu(y3)

# Output layer
W4 = tf.Variable(tf.random_normal([node_3, out_n]))
B4 = tf.Variable(tf.random_normal([out_n]))
y4 = tf.add(tf.matmul(y3,W4),B4)
y = tf.nn.softmax(y4)

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(200):
        sess.run(optimizer,feed_dict={x:training_data, y_:training_data_labels})

    correct = tf.equal(tf.argmax(y_, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
    print('Accuracy:',accuracy.eval({x:eval_data, y_:eval_data_labels}))

But the accuracy is very low. I have tried increasing range(200) to a much higher number, but it stays low.

What can I do to improve the results?

1 Answer:

Answer 0 (score: 2):

The problem is that you apply softmax to y4 and then pass the result to tf.nn.softmax_cross_entropy_with_logits. This is a common mistake; in fact, the documentation for softmax_cross_entropy_with_logits warns about it explicitly:

WARNING: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results.
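
To see the effect numerically, here is a quick standalone sketch (NumPy only, independent of the model above): applying softmax twice flattens the probability distribution, so the op ends up computing cross-entropy against the wrong values.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
print(softmax(logits))            # ~[0.659 0.242 0.099]  intended probabilities
print(softmax(softmax(logits)))   # ~[0.448 0.296 0.256]  what the op sees after a double softmax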

The rest of the code looks fine, so just pass y4 to the loss in place of y and delete y = tf.nn.softmax(y4). Note that tf.argmax picks the same class from raw logits as from softmax probabilities, so the accuracy computation can also use y4 directly.
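
Concretely, the corrected lines would look like this (a sketch, assuming the rest of the graph stays exactly as posted):

# Pass the raw logits y4; the op applies softmax internally.
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y4))
optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

# argmax over logits equals argmax over softmax probabilities,
# so accuracy can be computed from y4 directly.
correct = tf.equal(tf.argmax(y_, 1), tf.argmax(y4, 1))
accuracy = tf.reduce_mean(tf.cast(correct, 'float'))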