TensorFlow XOR implementation fails to reach 100% accuracy

Date: 2018-06-11 15:13:45

Tags: tensorflow neural-network tensor

I am new to machine learning and TensorFlow. I tried to implement an XOR gate in TensorFlow, and this is the code I came up with:

import numpy as np
import tensorflow as tf

tf.reset_default_graph()

learning_rate = 0.01
n_epochs = 1000
n_inputs = 2
n_hidden1 = 2
n_outputs = 2

# The four XOR input pairs and their labels.
arr1, target = [[0, 0], [0, 1], [1, 0], [1, 1]], [0, 1, 1, 0]

X_data = np.array(arr1).astype(np.float32)
y_data = np.array(target).astype(np.int64)  # match the tf.int64 placeholder below


X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None,), name="y")  # (None,) = 1-D batch of labels; bare (None) is just None


with tf.name_scope("dnn_tf"):
    hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1", activation=tf.nn.relu)
    logits = tf.layers.dense(hidden1, n_outputs, name="outputs")

with tf.name_scope("loss"):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(xentropy, name="loss")

with tf.name_scope("train"):
    optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)
    training_op = optimizer.minimize(loss)

with tf.name_scope("eval"):
    correct = tf.nn.in_top_k(logits, y, 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

init = tf.global_variables_initializer()

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        sess.run(training_op, feed_dict={X: X_data, y: y_data})
        # Evaluate accuracy after the update so acc_train is always defined
        # before it is printed (printing it first raises a NameError on epoch 0).
        acc_train = accuracy.eval(feed_dict={X: X_data, y: y_data})
        if epoch % 100 == 0:
            print("Epoch: ", epoch, " Train Accuracy: ", acc_train)

The code runs fine, but I get a different result on each run:

Run 1

Epoch:  0  Train Accuracy:  0.75
Epoch:  100  Train Accuracy:  1.0
Epoch:  200  Train Accuracy:  1.0
Epoch:  300  Train Accuracy:  1.0
Epoch:  400  Train Accuracy:  1.0
Epoch:  500  Train Accuracy:  1.0
Epoch:  600  Train Accuracy:  1.0
Epoch:  700  Train Accuracy:  1.0
Epoch:  800  Train Accuracy:  1.0
Epoch:  900  Train Accuracy:  1.0

Run 2

Epoch:  0  Train Accuracy:  1.0
Epoch:  100  Train Accuracy:  0.75
Epoch:  200  Train Accuracy:  0.75
Epoch:  300  Train Accuracy:  0.75
Epoch:  400  Train Accuracy:  0.75
Epoch:  500  Train Accuracy:  0.75
Epoch:  600  Train Accuracy:  0.75
Epoch:  700  Train Accuracy:  0.75
Epoch:  800  Train Accuracy:  0.75
Epoch:  900  Train Accuracy:  0.75

Run 3

Epoch:  0  Train Accuracy:  1.0
Epoch:  100  Train Accuracy:  0.5
Epoch:  200  Train Accuracy:  0.5
Epoch:  300  Train Accuracy:  0.5
Epoch:  400  Train Accuracy:  0.5
Epoch:  500  Train Accuracy:  0.5
Epoch:  600  Train Accuracy:  0.5
Epoch:  700  Train Accuracy:  0.5
Epoch:  800  Train Accuracy:  0.5
Epoch:  900  Train Accuracy:  0.5

I cannot figure out what I am doing wrong here, or why my solution does not converge reliably.

2 Answers:

Answer 0 (score: 1)

In theory, XOR can be solved with a single hidden layer of two ReLU units, as in your code. However, there is always a crucial difference between a network that can represent a solution and one that can actually learn it. My guess is that, because the network is so small, you are running into the "dying ReLU" problem: due to an unlucky random initialization, one (or both) of your hidden units never activates for any input. Unfortunately, ReLU also has zero gradient at zero activation, so a unit that never activates can never learn anything either.
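To check whether this is what is happening, you could inspect the hidden activations directly. This is a small diagnostic sketch of mine, not part of the original answer; it assumes the hidden1 tensor and the open session from the question's code:

# Diagnostic sketch (my addition): evaluate the hidden layer on all four
# XOR inputs. A column that is zero for every input is a "dead" unit.
h = sess.run(hidden1, feed_dict={X: X_data})
print(h)                      # shape (4, 2): rows are inputs, columns are units
print((h == 0).all(axis=0))   # True in a column marks a dead unit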

Increasing the number of hidden units makes this much less likely (with, say, five units, three could die and the remaining two would still be enough to solve the problem), which would explain why you were more successful with five hidden units. A sketch of two common mitigations follows below.
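As an illustration (my sketch, not the answerer's code; the width of 5 and the leaky-ReLU variant are assumptions), you can widen the hidden layer, or switch to an activation that keeps a nonzero gradient everywhere so units cannot die:

# Sketch of two mitigations (illustrative values, not from the answer):
# 1) widen the hidden layer so a few dead units are survivable;
# 2) use leaky ReLU, which keeps a small gradient for negative inputs.
hidden1 = tf.layers.dense(X, 5, name="hidden1",
                          activation=tf.nn.leaky_relu)
logits = tf.layers.dense(hidden1, n_outputs, name="outputs")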

Answer 1 (score: 0)

You may want to check out the interactive TensorFlow Playground (https://playground.tensorflow.org). It includes an XOR dataset. You can play with the number of hidden layers, their sizes, the activation functions, and so on, and visualize the decision boundary the classifier learns as the number of epochs increases.