Why don't variable values change during TensorFlow training?

Asked: 2017-11-01 07:08:10

Tags: python machine-learning tensorflow deep-learning

The value of m always stays at [1, 1, 1, 1]. Why? The values of w and b, however, change correctly. I'm confused.

import tensorflow as tf
import numpy
import os
import sys

learnRateT = 0.1
batchMulti = 5000

x = tf.placeholder(tf.float32)
xData = numpy.array([[4, 5, 3, 7], [2, 1, 2, 6], [9, 8, 7, 6], [0, 1, 9, 3], [3, 3, 0, 3], [6, 2, 5, 8], [7, 4, 4, 7], [5.0, 1, 8, 0], [5.0, 1, 1, 0], [2, 1, 3, 1]], dtype=numpy.float32)

rowSize = int(xData[0].size)
rowCount = int(xData.size / rowSize)

yTrain = tf.placeholder(tf.float32)
yTrainData = numpy.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1], dtype=numpy.float32)

m = tf.Variable(tf.ones([rowSize], dtype=tf.float32), dtype=tf.float32, trainable=True)

m9 = tf.fill(m.shape, 9.0)

mm = tf.mod(m * 10, m9) + 1

y0 = tf.floor(tf.mod(x, mm))

w = tf.Variable(tf.ones([rowSize]), dtype=tf.float32)
b = tf.Variable(0, dtype=tf.float32)

y = tf.reduce_mean(tf.nn.sigmoid(w * y0 + b))

loss = tf.abs(y - tf.reduce_mean(yTrain))

optimizer = tf.train.AdadeltaOptimizer(learnRateT)

train = optimizer.minimize(loss)

init = tf.global_variables_initializer()
sess = tf.Session()

sess.run(init)

totalLossSum = 0.0

for i in range(batchMulti):

    lossSum = 0

    for j in range(rowCount):
        result = sess.run([loss, y, y0, yTrain, x, w, b, train, m, mm], feed_dict={x: xData[j], yTrain: yTrainData[j]})

        lossSum = lossSum + float(result[0])

        if i % 1000 == 0:
            print("i: %d, j: %d, loss: %10.10f, y: %f, yTrain: %f, x: %s, y0: %s, m: %s, mm: %s" % (i, j, float(result[0]), float(result[1]), yTrainData[j], xData[j], result[2], result[8], result[9]))

    if i % 1000 == 0:
        print("avgLoss: %10.10f(%e)" % (lossSum / rowCount, lossSum / rowCount))

print("Calculate result: ------")
result = sess.run([y, w, b, loss], feed_dict={x: [5.0, 1, 6, 0], yTrain: 0})
print(result)

The output looks like this:

i: 4000, j: 0, loss: 0.1410247087, y: 0.858975, yTrain: 1.000000, x: [ 4.  5.  3.  7.], y0: [ 0.  1.  1.  1.], m: [ 1.  1.  1.  1.], mm: [ 2.  2.  2.  2.]
i: 4000, j: 1, loss: 0.7396742105, y: 0.739674, yTrain: 0.000000, x: [ 2.  1.  2.  6.], y0: [ 0.  1.  0.  0.], m: [ 1.  1.  1.  1.], mm: [ 2.  2.  2.  2.]
i: 4000, j: 2, loss: 0.2074543238, y: 0.792546, yTrain: 1.000000, x: [ 9.  8.  7.  6.], y0: [ 1.  0.  1.  0.], m: [ 1.  1.  1.  1.], mm: [ 2.  2.  2.  2.]
i: 4000, j: 3, loss: 0.1410146952, y: 0.858985, yTrain: 1.000000, x: [ 0.  1.  9.  3.], y0: [ 0.  1.  1.  1.], m: [ 1.  1.  1.  1.], mm: [ 2.  2.  2.  2.]
i: 4000, j: 4, loss: 0.8225950599, y: 0.822595, yTrain: 0.000000, x: [ 3.  3.  0.  3.], y0: [ 1.  1.  0.  1.], m: [ 1.  1.  1.  1.], mm: [ 2.  2.  2.  2.]
i: 4000, j: 5, loss: 0.2410957813, y: 0.758904, yTrain: 1.000000, x: [ 6.  2.  5.  8.], y0: [ 0.  0.  1.  0.], m: [ 1.  1.  1.  1.], mm: [ 2.  2.  2.  2.]
i: 4000, j: 6, loss: 0.7717698216, y: 0.771770, yTrain: 0.000000, x: [ 7.  4.  4.  7.], y0: [ 1.  0.  0.  1.], m: [ 1.  1.  1.  1.], mm: [ 2.  2.  2.  2.]
i: 4000, j: 7, loss: 0.7733184099, y: 0.773318, yTrain: 0.000000, x: [ 5.  1.  8.  0.], y0: [ 1.  1.  0.  0.], m: [ 1.  1.  1.  1.], mm: [ 2.  2.  2.  2.]
i: 4000, j: 8, loss: 0.1566338539, y: 0.843366, yTrain: 1.000000, x: [ 5.  1.  1.  0.], y0: [ 1.  1.  1.  0.], m: [ 1.  1.  1.  1.], mm: [ 2.  2.  2.  2.]
i: 4000, j: 9, loss: 0.1410219669, y: 0.858978, yTrain: 1.000000, x: [ 2.  1.  3.  1.], y0: [ 0.  1.  1.  1.], m: [ 1.  1.  1.  1.], mm: [ 2.  2.  2.  2.]
avgLoss: 0.4135602832(4.135603e-01)

1 answer:

Answer 0 (score: 0)

The gradient of tf.floor is always 0. See this question and this GitHub issue. Consequently, no gradient flows back through y0 to mm and m, so those nodes (and everything upstream of them) are never updated. That is why they stay the same.
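The zero gradient is not a TensorFlow quirk but a property of the function itself: floor is piecewise constant, so its derivative is 0 everywhere it is defined. A quick finite-difference sketch in plain NumPy (not from the answer) shows why no learning signal can pass through it:

```python
import numpy as np

def numeric_grad(f, x, eps=1e-4):
    # Central finite difference: approximates df/dx at x.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# floor(x) is flat between integers, so the numeric gradient is 0
# almost everywhere; backprop through tf.floor therefore returns 0.
g = numeric_grad(np.floor, 2.3)
print(g)  # 0.0
```

Since m only appears under that floor (via mm and y0), every gradient reaching it is multiplied by this 0, and the optimizer leaves it at its initial value [1, 1, 1, 1].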

By the way, even if tf.floor did pass a gradient through, tf.mod did not provide one either at the time (see this issue).
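For contrast, mod is differentiable almost everywhere: away from multiples of the divisor, d/dx (x mod m) = 1. So the missing tf.mod gradient was an unregistered op gradient in TensorFlow at that time, not a mathematical impossibility. A finite-difference sketch (again plain NumPy, an illustration rather than the answer's code):

```python
import numpy as np

def numeric_grad(f, x, eps=1e-4):
    # Central finite difference: approximates df/dx at x.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# (x mod 3) has slope 1 in x away from the jump points at 0, 3, 6, ...
g = numeric_grad(lambda v: np.mod(v, 3.0), 2.3)
print(round(g, 6))
```

This prints a value very close to 1.0, confirming that a well-defined gradient exists between the discontinuities.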