I have built an MLP with Google's TensorFlow library. The network works, but somehow it refuses to learn properly. It always converges to an output close to 1.0, no matter what the input actually is.
The complete code can be seen here.
Any ideas?
The input and output data (batch size 4) are as follows:
input_data = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]] # XOR input
output_data = [[0.], [1.], [1.], [0.]] # XOR output
n_input = tf.placeholder(tf.float32, shape=[None, 2], name="n_input")
n_output = tf.placeholder(tf.float32, shape=[None, 1], name="n_output")
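For context, XOR is not linearly separable, which is why a hidden layer is needed at all. A quick brute-force NumPy check, independent of the TensorFlow code (the weight grid here is an arbitrary choice for illustration):

```python
import itertools
import numpy as np

# XOR data, as in the question
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

# brute-force a grid of linear classifiers: predict 1 iff w1*x1 + w2*x2 + b > 0
best = 0
for w1, w2, b in itertools.product(np.linspace(-2, 2, 41), repeat=3):
    pred = (X[:, 0] * w1 + X[:, 1] * w2 + b > 0).astype(float)
    best = max(best, int((pred == y).sum()))

print(best)  # at most 3 of the 4 XOR points are classified correctly
```

No single linear boundary gets all four points right, so a network without a hidden layer cannot solve this task regardless of training.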
Hidden layer configuration:
# hidden layer's bias neuron
b_hidden = tf.Variable(0.1, name="hidden_bias")
# hidden layer's weight matrix initialized with a uniform distribution
W_hidden = tf.Variable(tf.random_uniform([2, hidden_nodes], -1.0, 1.0), name="hidden_weights")
# calc hidden layer's activation
hidden = tf.sigmoid(tf.matmul(n_input, W_hidden) + b_hidden)
Output layer configuration:
W_output = tf.Variable(tf.random_uniform([hidden_nodes, 1], -1.0, 1.0), name="output_weights") # output layer's weight matrix
output = tf.sigmoid(tf.matmul(hidden, W_output)) # calc output layer's activation
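As a framework-free sanity check, the two layers above can be sketched in plain NumPy (the random weights here are hypothetical stand-ins, not trained values):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
hidden_nodes = 5

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # batch of 4 inputs
W_hidden = rng.uniform(-1.0, 1.0, size=(2, hidden_nodes))
b_hidden = 0.1                                           # scalar bias, as in the question
W_output = rng.uniform(-1.0, 1.0, size=(hidden_nodes, 1))

hidden = sigmoid(X @ W_hidden + b_hidden)  # shape (4, hidden_nodes)
output = sigmoid(hidden @ W_output)        # shape (4, 1), values strictly in (0, 1)
print(output.shape)
```

Because the output layer ends in a sigmoid, every prediction necessarily lies in (0, 1); the question is why training pushes all of them toward 1.0.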
My learning setup looks as follows:
loss = tf.reduce_mean(cross_entropy) # mean the cross_entropy
optimizer = tf.train.GradientDescentOptimizer(0.01) # take a gradient descent for optimizing
train = optimizer.minimize(loss) # let the optimizer train
I tried two setups for the cross entropy:
cross_entropy = -tf.reduce_sum(n_output * tf.log(output))
and
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(n_output, output)
where n_output is the original output as given in output_data and output is the value predicted/computed by my network.
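One thing worth noting about the second variant: `tf.nn.sigmoid_cross_entropy_with_logits` expects the raw pre-sigmoid logits, while `output` above has already been passed through `tf.sigmoid`. A NumPy sketch of the difference (the values here are arbitrary; the stable formula is the standard one for cross entropy from logits):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y = 1.0             # target label
logit = 2.0         # raw pre-activation value
p = sigmoid(logit)  # probability after the sigmoid

# binary cross entropy computed from the probability
ce_from_prob = -(y * np.log(p) + (1 - y) * np.log(1 - p))

# numerically stable form that expects the raw logit
ce_from_logit = max(logit, 0) - logit * y + np.log(1 + np.exp(-abs(logit)))

print(np.isclose(ce_from_prob, ce_from_logit))  # True: same loss, given the right input

# feeding the already-squashed probability into the logit formula gives a different number
ce_wrong = max(p, 0) - p * y + np.log(1 + np.exp(-abs(p)))
print(np.isclose(ce_wrong, ce_from_logit))      # False
```

So applying a sigmoid twice (once in the graph, once implicitly inside the loss) distorts both the loss and its gradients.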
The training inside the for-loop (for n epochs) goes like this:
cvalues = sess.run([train, loss, W_hidden, b_hidden, W_output],
                   feed_dict={n_input: input_data, n_output: output_data})
I am saving the results in cvalues for debug printing of loss, W_hidden, ...
No matter what I try, when I test my network and try to validate the output, it always produces something like this:
(...)
step: 2000
loss: 0.0137040186673
b_hidden: 1.3272010088
W_hidden: [[ 0.23195425 0.53248233 -0.21644847 -0.54775208 0.52298909]
[ 0.73933059 0.51440752 -0.08397482 -0.62724304 -0.53347367]]
W_output: [[ 1.65939867]
[ 0.78912479]
[ 1.4831928 ]
[ 1.28612828]
[ 1.12486529]]
(--- finished with 2000 epochs ---)
(Test input for validation:)
input: [0.0, 0.0] | output: [[ 0.99339396]]
input: [0.0, 1.0] | output: [[ 0.99289012]]
input: [1.0, 0.0] | output: [[ 0.99346077]]
input: [1.0, 1.0] | output: [[ 0.99261558]]
So the network does not learn properly; it always converges to nearly 1.0, no matter which input is fed in.
Answer 0 (score: 8)
Meanwhile, with the help of a colleague, I was able to fix my solution and want to post it for completeness. My solution works with cross entropy and without changing the training data. Additionally, it has the desired input shape of (1, 2), and the output is a scalar.
It uses the AdamOptimizer, which decreases the error much faster than the GradientDescentOptimizer. See this post for more information on (and problems with ^^) that optimizer.
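To illustrate, here is a minimal NumPy sketch of the two update rules on the toy function f(x) = x², using the usual default Adam constants (an illustration of the update math only, not TensorFlow's implementation):

```python
import numpy as np

def grad(x):            # gradient of f(x) = x^2
    return 2.0 * x

lr = 0.1
x_sgd = 5.0
x_adam = 5.0
m, v = 0.0, 0.0
beta1, beta2, eps = 0.9, 0.999, 1e-8

for t in range(1, 101):
    # plain gradient descent: step proportional to the raw gradient
    x_sgd -= lr * grad(x_sgd)

    # Adam: bias-corrected running moments of the gradient
    g = grad(x_adam)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    x_adam -= lr * m_hat / (np.sqrt(v_hat) + eps)

print(abs(x_sgd), abs(x_adam))  # both approach the minimum at 0
```

Adam's per-parameter step size adapts to the gradient history, which is what tends to make it converge in far fewer steps on problems like this one.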
In fact, my network produces reasonably good results after only 400-800 learning steps.
After 2000 learning steps the output is almost "perfect":
step: 2000
loss: 0.00103311243281
input: [0.0, 0.0] | output: [[ 0.00019799]]
input: [0.0, 1.0] | output: [[ 0.99979786]]
input: [1.0, 0.0] | output: [[ 0.99996307]]
input: [1.0, 1.0] | output: [[ 0.00033751]]
import tensorflow as tf
#####################
# preparation stuff #
#####################
# define input and output data
input_data = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]] # XOR input
output_data = [[0.], [1.], [1.], [0.]] # XOR output
# create a placeholder for the input
# None indicates a variable batch size for the input
# one input's dimension is [1, 2] and output's [1, 1]
n_input = tf.placeholder(tf.float32, shape=[None, 2], name="n_input")
n_output = tf.placeholder(tf.float32, shape=[None, 1], name="n_output")
# number of neurons in the hidden layer
hidden_nodes = 5
################
# hidden layer #
################
# hidden layer's bias neuron
b_hidden = tf.Variable(tf.random_normal([hidden_nodes]), name="hidden_bias")
# hidden layer's weight matrix initialized with a normal distribution
W_hidden = tf.Variable(tf.random_normal([2, hidden_nodes]), name="hidden_weights")
# calc hidden layer's activation
hidden = tf.sigmoid(tf.matmul(n_input, W_hidden) + b_hidden)
################
# output layer #
################
W_output = tf.Variable(tf.random_normal([hidden_nodes, 1]), name="output_weights") # output layer's weight matrix
output = tf.sigmoid(tf.matmul(hidden, W_output)) # calc output layer's activation
############
# learning #
############
cross_entropy = -(n_output * tf.log(output) + (1 - n_output) * tf.log(1 - output))
# cross_entropy = tf.square(n_output - output) # simpler, but also works
loss = tf.reduce_mean(cross_entropy) # mean the cross_entropy
optimizer = tf.train.AdamOptimizer(0.01) # use the Adam optimizer with a learning rate of 0.01
train = optimizer.minimize(loss) # let the optimizer train
####################
# initialize graph #
####################
init = tf.initialize_all_variables()
sess = tf.Session() # create the session and therefore the graph
sess.run(init) # initialize all variables
#####################
# train the network #
#####################
for epoch in xrange(0, 2001):
    # run the training operation
    cvalues = sess.run([train, loss, W_hidden, b_hidden, W_output],
                       feed_dict={n_input: input_data, n_output: output_data})

    # print some debug stuff
    if epoch % 200 == 0:
        print("")
        print("step: {:>3}".format(epoch))
        print("loss: {}".format(cvalues[1]))
        # print("b_hidden: {}".format(cvalues[3]))
        # print("W_hidden: {}".format(cvalues[2]))
        # print("W_output: {}".format(cvalues[4]))

print("")
print("input: {} | output: {}".format(input_data[0], sess.run(output, feed_dict={n_input: [input_data[0]]})))
print("input: {} | output: {}".format(input_data[1], sess.run(output, feed_dict={n_input: [input_data[1]]})))
print("input: {} | output: {}".format(input_data[2], sess.run(output, feed_dict={n_input: [input_data[2]]})))
print("input: {} | output: {}".format(input_data[3], sess.run(output, feed_dict={n_input: [input_data[3]]})))
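Note that the TF 0.x API used above (`tf.placeholder`, `tf.initialize_all_variables`, Python 2's `xrange`) no longer exists in current TensorFlow. As a library-independent sketch, the same 2-5-1 sigmoid network with the same cross-entropy loss can be trained in plain NumPy (full-batch gradient descent instead of Adam, fixed seed; the backprop derivation is mine, not from the original post):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])

hidden_nodes = 5
W1 = rng.normal(size=(2, hidden_nodes))
b1 = rng.normal(size=(hidden_nodes,))
W2 = rng.normal(size=(hidden_nodes, 1))   # no output bias, mirroring the graph above

losses = []
lr = 0.5
for _ in range(5000):
    # forward pass, mirroring the TensorFlow graph
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2)
    losses.append(-np.mean(Y * np.log(out) + (1 - Y) * np.log(1 - out)))

    # backward pass: d(loss)/d(output logit) for sigmoid + cross entropy is (out - Y)
    d_out = (out - Y) / len(X)
    dW2 = h.T @ d_out
    d_h = d_out @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2

print(losses[0], losses[-1])  # the loss decreases over training
```

The key structural points carry over unchanged: per-neuron biases in the hidden layer, cross entropy computed from probabilities that are produced exactly once, and a loss averaged over the batch.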
Answer 1 (score: 0)
I can't comment because I don't have enough reputation, but I have some questions about this answer. The $L_2$ loss function makes sense, since it is basically the MSE function, but why wouldn't cross entropy work? It certainly works for other NN libraries. Second, why on earth would translating your input space from $[0, 1]$ to $[-1, 1]$ have any effect, especially since you added a bias vector?
EDIT: Here is a solution using cross entropy and one-hot encoding, compiled from several sources.
EDIT^2: Changed the code to use cross entropy without any extra encoding or any weird shifting of the target values.
import math
import tensorflow as tf
import numpy as np
HIDDEN_NODES = 10
x = tf.placeholder(tf.float32, [None, 2])
W_hidden = tf.Variable(tf.truncated_normal([2, HIDDEN_NODES]))
b_hidden = tf.Variable(tf.zeros([HIDDEN_NODES]))
hidden = tf.nn.relu(tf.matmul(x, W_hidden) + b_hidden)
W_logits = tf.Variable(tf.truncated_normal([HIDDEN_NODES, 1]))
b_logits = tf.Variable(tf.zeros([1]))
logits = tf.add(tf.matmul(hidden, W_logits), b_logits)
y = tf.nn.sigmoid(logits)
y_input = tf.placeholder(tf.float32, [None, 1])
loss = -(y_input * tf.log(y) + (1 - y_input) * tf.log(1 - y))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)
xTrain = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
yTrain = np.array([[0], [1], [1], [0]])
for i in xrange(2000):
    _, loss_val, logitsval = sess.run([train_op, loss, logits],
                                      feed_dict={x: xTrain, y_input: yTrain})
    if i % 10 == 0:
        print "Step:", i, "Current loss:", loss_val, "logits", logitsval

print "---------"
print sess.run(y, feed_dict={x: xTrain})