TensorFlow neural network always returns 50% after training

Asked: 2018-04-06 07:13:24

Tags: python numpy tensorflow machine-learning neural-network

I just finished a tutorial on neural networks and am trying to put my knowledge to the test. I built a simple network that learns XOR logic, but for some reason it always returns 0.5 (50% certainty). Here is my code:

import tensorflow as tf
import numpy as np

def random_normal(shape=1):
    return (np.random.random(shape) - 0.5) * 2

train_x = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
train_y = np.array([1, 1, 0, 0])

input_size = 2
hidden_size = 16
output_size = 1

x = tf.placeholder(dtype=tf.float32, name="X")
y = tf.placeholder(dtype=tf.float32, name="Y")

W1 = tf.Variable(random_normal((input_size, hidden_size)), dtype=tf.float32, name="W1")
W2 = tf.Variable(random_normal((hidden_size, output_size)), dtype=tf.float32, name="W2")

b1 = tf.Variable(random_normal(hidden_size), dtype=tf.float32, name="b1")
b2 = tf.Variable(random_normal(output_size), dtype=tf.float32, name="b2")

l1 = tf.sigmoid(tf.add(tf.matmul(x, W1), b1), name="l1")
result = tf.sigmoid(tf.add(tf.matmul(l1, W2), b2), name="l2")

r_squared = tf.square(result - y)
loss = tf.reduce_mean(r_squared)

optimizer = tf.train.GradientDescentOptimizer(0.1)
train = optimizer.minimize(loss)

hm_epochs = 10000

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for itr in range(hm_epochs):
        sess.run(train, {x: train_x, y: train_y})
        if itr % 100 == 0:
            print("Epoch {} done".format(itr))
    print(sess.run(result, {x: [[1, 0]]}))

Sorry if this is a bad question; I'm new to machine learning.

2 Answers:

Answer 0 (score: 0)

Your neural network is actually correct, and the answer may surprise you. Change ...

train_x = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
train_y = np.array([1, 1, 0, 0])

to ...

train_x = np.array([[1, 0], [0, 1], [1, 1], [0, 0]]).reshape((4, 2))
train_y = np.array([1, 1, 0, 0]).reshape((4, 1))

You can check that np.array([1, 1, 0, 0]).shape is (4,), not (4, 1). As a result, the fed y also takes shape (4,), so result - y broadcasts to shape (4, 4)! In other words, the loss averages 16 differences that have nothing to do with the actual comparison between predictions and labels. So my advice for the future: always specify placeholder shapes explicitly, so that mistakes like this are easier to find.

You can find the complete code in this GitHub gist I created. One more remark: the final sigmoid actually makes it harder to learn the [0, 1] output. If you remove it, the network converges much faster.
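
A minimal sketch of that advice applied to the asker's code (the explicit shape arguments and the dropped final sigmoid are my rendering of the advice above, not a copy of the gist):

x = tf.placeholder(dtype=tf.float32, shape=(None, 2), name="X")
y = tf.placeholder(dtype=tf.float32, shape=(None, 1), name="Y")

# feeding train_y with shape (4,) now raises an error instead of
# silently broadcasting result - y to shape (4, 4)

l1 = tf.sigmoid(tf.add(tf.matmul(x, W1), b1), name="l1")
result = tf.add(tf.matmul(l1, W2), b2, name="l2")   # no final sigmoid
loss = tf.reduce_mean(tf.square(result - y))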

Answer 1 (score: -2)

Using TensorFlow:

import tensorflow as tf
import keras
import numpy as np
seed = 128

train_x = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
train_y = np.array([1, 1, 0, 0])

test_x = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
test_y = np.array([1, 1, 0, 0])

num_classes = 2
y_train_binary = keras.utils.to_categorical(train_y, num_classes)
y_test_binary = keras.utils.to_categorical(test_y, num_classes)

def random_normal(shape=1):
    return (np.random.random(shape) - 0.5) * 2

n_hidden_1 = 16
n_input = train_x.shape[1]
n_classes = y_train_binary.shape[1]

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}

biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

keep_prob = tf.placeholder("float")

training_epochs = 500
display_step = 100
batch_size = 1

x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])

def multilayer_perceptron(x, weights, biases):
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    out_layer = tf.matmul(layer_1, weights['out']) + biases['out']
    return out_layer

predictions = multilayer_perceptron(x, weights, biases)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=predictions, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=0.1).minimize(cost)

sess = tf.Session()

sess.run(tf.global_variables_initializer())

for epoch in range(training_epochs):
    avg_cost = 0.0
    total_batch = int(len(train_x) / batch_size)
    x_batches = np.array_split(train_x, total_batch)
    y_batches = np.array_split(y_train_binary, total_batch)
    for i in range(total_batch):
        batch_x, batch_y = x_batches[i], y_batches[i]
        _, c = sess.run([optimizer, cost], 
                        feed_dict={x: batch_x, y: batch_y})
        avg_cost += c / total_batch

    if epoch % display_step == 0:
        print("Epoch:", '%04d' % (epoch+1), "cost={:.9f}".format(avg_cost))

print("Optimization Finished!")
correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Accuracy:", accuracy.eval({x: test_x, y: y_test_binary}, session=sess))

Epoch: 0001 cost=3.069790050
Epoch: 0101 cost=0.001279908
Epoch: 0201 cost=0.000363608
Epoch: 0301 cost=0.000168160
Epoch: 0401 cost=0.000095065
Optimization Finished!
Accuracy: 1.0

test_input = [0, 1]
'Label: ', np.argmax(sess.run(predictions , feed_dict={ x:[test_input]}))

('Label: ', 1)

For a simple case like this, you can use Keras to quickly test whether the dataset is well suited to a neural network. However, you need to simulate more data for the network to tune itself properly. I don't think the gradient descent algorithm can find the optimum by backpropagating over only 4 instances.

Let's simulate more data:

n = 1000

X_train = np.zeros((n, 2))
y_train = np.zeros((n,))

X_test = np.zeros((n//3, 2))
y_test = np.zeros((n//3,))

for i in range(n):
    # put roughly every third sample into the test set as well
    if i % 3 == 0 and i // 3 < len(X_test):
        a, b = np.random.randint(0, 2), np.random.randint(0, 2)
        X_test[i // 3, 0], X_test[i // 3, 1] = a, b
        y_test[i // 3] = (a and not b) or (not a and b)
    a, b = np.random.randint(0, 2), np.random.randint(0, 2)
    X_train[i, 0], X_train[i, 1] = a, b
    y_train[i] = (a and not b) or (not a and b)

num_classes = 2
y_train_binary = keras.utils.to_categorical(y_train, num_classes)
y_test_binary = keras.utils.to_categorical(y_test, num_classes)

input_shape = (2,)

Now let's build our model:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(16, activation='relu', input_shape=input_shape))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['acc'])

history=model.fit(X_train,
                  y_train_binary,
                  epochs=10,
                  batch_size=8,
                  validation_data=(X_test, y_test_binary))

This results in 100% accuracy.
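
As a quick sanity check (a sketch, assuming the model above has finished training), you can query the trained Keras model on a single XOR input:

test_input = np.array([[0, 1]])
print('Label:', np.argmax(model.predict(test_input), axis=1)[0])  # expected: 1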