TensorFlow - simple feed-forward NN not training

Time: 2017-07-05 03:48:55

Tags: python-3.x machine-learning tensorflow

I'm new to TensorFlow and have just built my first, very small network! My code runs, but its accuracy stays the same the whole time; it doesn't change as the network trains. My data has 15 features and 6 classes. Maybe I'll add more features later if that makes things easier and better. In short, my question is:

What is the general procedure for debugging TensorFlow code?

My network architecture was chosen arbitrarily, so maybe I should change the number of neurons per layer; it's by no means fixed.

sess1 = tf.Session()

num_predictors = len(training_predictors_tf.columns)
num_classes = len(training_classes_tf.columns)

feature_data = tf.placeholder(tf.float32, [None, num_predictors])
actual_classes = tf.placeholder(tf.float32, [None, num_classes])

weights1 = tf.Variable(tf.truncated_normal([num_predictors, 50], stddev=0.0001))
biases1 = tf.Variable(tf.ones([50]))

weights2 = tf.Variable(tf.truncated_normal([50, 45], stddev=0.0001))
biases2 = tf.Variable(tf.ones([45]))

weights3 = tf.Variable(tf.truncated_normal([45, 25], stddev=0.0001))
biases3 = tf.Variable(tf.ones([25]))

weights4 = tf.Variable(tf.truncated_normal([25, num_classes], stddev=0.0001))
biases4 = tf.Variable(tf.ones([num_classes]))

hidden_layer_1 = tf.nn.relu(tf.matmul(feature_data, weights1) + biases1)
hidden_layer_2 = tf.nn.relu(tf.matmul(hidden_layer_1, weights2) + biases2)
hidden_layer_3 = tf.nn.relu(tf.matmul(hidden_layer_2, weights3) + biases3)

out = tf.matmul(hidden_layer_3, weights4) + biases4

model = tf.nn.softmax_cross_entropy_with_logits(labels=actual_classes, logits=out)

# cost = -tf.reduce_sum(actual_classes*tf.log(model))


cross_entropy = tf.reduce_mean(model)

train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)


# train_step = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cross_entropy)

sess1.run(tf.global_variables_initializer())

correct_prediction = tf.equal(tf.argmax(out, 1), tf.argmax(actual_classes, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

for i in range(1, 30001):
    sess1.run(
        train_step, 
        feed_dict={
            feature_data: training_predictors_tf.values, 
            actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), num_classes)
        }
    )
    if i%5000 == 0:
        print(i, sess1.run(
            accuracy,
            feed_dict={
                feature_data: training_predictors_tf.values, 
                actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), num_classes)
            }
        ))

Here is my output:

5000 0.3627
10000 0.3627
15000 0.3627
20000 0.3627
25000 0.3627
30000 0.3627

EDIT: I scaled my data as explained here, to the range [-5; 0], but it still doesn't train the network any better :(

Snippet of the unscaled data (the first 6 columns are one-hot encoded):

2017-06-27  0   0   0   1   0   0   20120.0 20080.0 20070.0 20090.0 ... 20050.0 20160.0 20130.0 20160.0 20040.0 20040.0 20040.0 31753.0 36927.0 41516.0
2017-06-28  0   0   1   0   0   0   20150.0 20120.0 20080.0 20150.0 ... 20060.0 20220.0 20160.0 20130.0 20130.0 20040.0 20040.0 39635.0 31753.0 36927.0
2017-06-29  0   0   0   1   0   0   20140.0 20150.0 20120.0 20140.0 ... 20090.0 20220.0 20220.0 20160.0 20100.0 20130.0 20040.0 50438.0 39635.0 31753.0
2017-06-30  0   1   0   0   0   0   20210.0 20140.0 20150.0 20130.0 ... 20150.0 20270.0 20220.0 20220.0 20050.0 20100.0 20130.0 58983.0 50438.0 39635.0
2017-07-03  0   0   0   1   0   0   20020.0 20210.0 20140.0 20210.0 ... 20140.0 20250.0 20270.0 20220.0 19850.0 20050.0 20100.0 88140.0 58983.0 50438.0

1 answer:

Answer 0: (score: 0)

Debugging a network and improving it are two different things. To improve it, once you have chosen a type of classifier (e.g. a neural network), you should look at both the training and the validation accuracy and tune your hyperparameters as a function of the two. See for instance the Practical methodology chapter of Goodfellow and alii's book for how to tune hyperparameters (it's a bit long, but pure gold!).
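As an illustration of that suggestion, a minimal sketch of monitoring both accuracies, assuming a hypothetical held-out validation_predictors_tf / validation_classes_tf split alongside the training DataFrames from the question, could look like this:

# Evaluate the same accuracy op on the training set and on a held-out validation set.
train_acc = sess1.run(accuracy, feed_dict={
    feature_data: training_predictors_tf.values,
    actual_classes: training_classes_tf.values.reshape(-1, num_classes)})
val_acc = sess1.run(accuracy, feed_dict={
    feature_data: validation_predictors_tf.values,
    actual_classes: validation_classes_tf.values.reshape(-1, num_classes)})
print('train accuracy: %.4f, validation accuracy: %.4f' % (train_acc, val_acc))

A growing gap between the two numbers points to overfitting; two numbers that are both stuck point to an optimization or data problem.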

As for debugging, that's the harder part. You usually do it by printing the values of a few "key tensors" every so often. You clearly have a bug somewhere, otherwise your accuracy would change at least a little during training. A common cause of this is exploding gradients, which make NaNs (and sometimes infinities, or even zeros in strange places) appear very early in training and basically block any further update in your network. I suggest printing your loss, and maybe the norm of the gradients; that should tell you whether this is the problem. If it is, the quick-and-dirty fix is to use a smaller learning rate at the start and increase it later. The real solution is gradient clipping.

An example of how to print several tensor values at once:

if i%5000 == 0:
    acc_val, loss_val, predictions = sess1.run(
        [accuracy, cross_entropy, tf.argmax(out, 1)],
        feed_dict={
            feature_data: training_predictors_tf.values, 
            actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), num_classes)
        })
    # You could also print predictions, but it will be very large, making the other
    # values harder to read. It would let you check whether, for instance, the model
    # always predicts class 0...
    print(i, acc_val, loss_val)
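To go with the gradient-norm and gradient-clipping suggestions above, a minimal sketch reusing cross_entropy from the question could look like the following; the clip_norm value of 5.0 is an arbitrary assumption, not something prescribed by the question:

optimizer = tf.train.GradientDescentOptimizer(0.05)
# compute_gradients returns (gradient, variable) pairs for every trainable variable.
gradients, variables = zip(*optimizer.compute_gradients(cross_entropy))
# Rescale the gradients so that their global norm never exceeds clip_norm.
clipped_gradients, grad_norm = tf.clip_by_global_norm(gradients, clip_norm=5.0)
train_step = optimizer.apply_gradients(list(zip(clipped_gradients, variables)))

Fetching grad_norm alongside the loss in the printing loop above would show whether the gradients are exploding before any NaN shows up.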