TensorFlow - how is the graph executed?

Date: 2017-07-28 22:14:36

Tags: machine-learning tensorflow

I am trying to get the output of an activation function as the weights change. When the weights change, I expect the activation to change as well.

When I simply change the weights before the activation, I do get a change in the activation value:

import tensorflow as tf

def sigmoid(x, derivative=False):
    # elementwise logistic function; returns its derivative when requested
    if derivative:
        return (1.0 / (1 + tf.exp(-x))) * (1.0 - (1.0 / (1 + tf.exp(-x))))
    return 1.0 / (1 + tf.exp(-x))

def dummy(x):
    # update the weight first, then build the activation from the updated value
    weights['h0'] = tf.assign(weights['h0'], tf.add(weights['h0'], 0.1))
    res = tf.add(weights['h0'], x)
    res = sigmoid(res)
    return res

# build the computational graph (weights must exist before dummy() is called)
weights = {
    'h0': tf.Variable(tf.random_normal([1]))
}
a = tf.placeholder('float', None)
d = dummy(a)
# initialize variables
init = tf.global_variables_initializer()

# create session and run the graph
with tf.Session() as sess:
    sess.run(init)
    for i in range(10):
        print (sess.run(d, feed_dict={a: [2]}))

But when I try to change the weights after the activation (for example, as in backprop), I get the same activation every time. Can anyone explain what is going on, and what I can do so that the activation changes after each iteration?

import tensorflow as tf

def sigmoid(x, derivative=False):
    if derivative:
        return (1.0 / (1 + tf.exp(-x))) * (1.0 - (1.0 / (1 + tf.exp(-x))))
    return 1.0 / (1 + tf.exp(-x))

def dummy(x):
    # build the activation from the current weight, then update the weight
    res = tf.add(weights['h0'], x)
    res = sigmoid(res)
    weights['h0'] = tf.assign(weights['h0'], tf.add(weights['h0'], 0.1))
    return res

# build the computational graph (weights must exist before dummy() is called)
weights = {
    'h0': tf.Variable(tf.random_normal([1]))
}
a = tf.placeholder('float', None)
d = dummy(a)
# initialize variables
init = tf.global_variables_initializer()

# create session and run the graph
with tf.Session() as sess:
    sess.run(init)
    for i in range(10):
        print (sess.run(d, feed_dict={a: [2]}))

Edit:

It seems like the whole graph is not being run? I can do this:

with tf.Session() as sess:
    sess.run(init)
    for i in range(10):
        sess.run(weights['h0'])  # after dummy(), this is the assign op, so running it updates the weight
        print (sess.run(d, feed_dict={a: [2]}))

where I run the weights first, and then it gives me different values. Is this right?
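I guess an alternative would be to make the update an explicit dependency of d with tf.control_dependencies, so that fetching d also runs the assign. A rough sketch of that idea (my own attempt, plugging into the script above, not tested):

def dummy(x):
    res = sigmoid(tf.add(weights['h0'], x))          # activation from the current weight
    with tf.control_dependencies([res]):
        update = tf.assign_add(weights['h0'], 0.1)   # runs only after res is computed
    with tf.control_dependencies([update]):
        return tf.identity(res)                      # fetching this also triggers the update

That way each sess.run(d, ...) should both return the activation and bump the weight.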

1 Answer:

Answer 0 (score: 2):

This line is not doing what you think it is:

    print (sess.run(d, feed_dict={a: [2]}))

You need to call sess.run() and pass in a training operation, typically the op returned by an optimizer's minimize() function.

Here are some examples of usage.

From the super-simple TensorFlow MNIST example:

  # Define loss and optimizer
  y_ = tf.placeholder(tf.float32, [None, 10])
  cross_entropy = tf.reduce_mean(
      tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
  train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
  ...

  for _ in range(1000):
      ...
      sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

From the TensorFlow multi-layer NN example:

  cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(\
                      logits=pred, labels=y))
  optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
  ...

  for i in range(total_batch):
      ...
      _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})

The general pattern is:

  1. Define a cost function J.
  2. Hand the cost J to an optimizer.
  3. Call sess.run() with the optimizer's training op as an argument (a minimal sketch follows this list).
  4. If you want to write your own optimizer, you will need a different approach. Writing your own cost function is typical; writing your own optimizer is not. Look at the code for AdamOptimizer or GradientDescentOptimizer for insight.
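
Putting those steps together, here is a minimal end-to-end sketch (the linear model, shapes, and squared-error cost are illustrative choices, not taken from the examples above):

  import tensorflow as tf

  # 1. define a model and a cost function J
  x  = tf.placeholder(tf.float32, [None, 1])
  y_ = tf.placeholder(tf.float32, [None, 1])
  w  = tf.Variable(tf.zeros([1, 1]))
  b  = tf.Variable(tf.zeros([1]))
  y  = tf.matmul(x, w) + b
  cost = tf.reduce_mean(tf.square(y - y_))  # J

  # 2. hand J to an optimizer; minimize() returns the training op
  train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cost)

  with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      # 3. each run of the training op updates w and b
      for _ in range(100):
          _, c = sess.run([train_step, cost],
                          feed_dict={x: [[1.0]], y_: [[2.0]]})

Every call to sess.run(train_step, ...) applies one gradient update to the variables, which is exactly the "weights change after the activation" behavior you were trying to get by hand.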