I seem to have a misunderstanding of how tf.cond works. In the tensorflow documentation the following example is given:

z = tf.multiply(a, b)
result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))

The result of the example is tf.add(x, z) if x < y is True, otherwise tf.square(y).
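For reference, a runnable version of that example could look like this (the concrete values chosen for x, y, a and b are my own assumptions, just to make the snippet self-contained):

import tensorflow as tf

x = tf.constant(2)
y = tf.constant(5)
a = tf.constant(3)
b = tf.constant(7)

z = tf.multiply(a, b)                                                 # 21
result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))  # x < y is True, so the add branch is taken

with tf.Session() as sess:
    print(sess.run(result))  # 23, i.e. tf.add(x, z); tf.square(y) is not evaluated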
Following this example, I tried to build a small example using tf.cond, but its result does not match what the documentation describes.
In my example, deterministic_action = 4, random_action = 11 and chose_random = False, so stochastic_action should be 4, but it is 1 instead. Where does the value 1 come from?
Here is the code:
#!/usr/bin/env python3
import tensorflow as tf
import numpy as np

with tf.Graph().as_default():
    with tf.device('/cpu:0'):
        stochastic_ph = tf.placeholder(tf.bool, (), name="stochastic")
        eps = tf.get_variable("eps", (), initializer=tf.constant_initializer(0))
        with tf.variable_scope('test_cond') as sc:
            deterministic_action = tf.random_uniform([], minval=0, maxval=15, dtype=tf.int64, seed=0)  # 4
            random_action = tf.random_uniform([], minval=0, maxval=15, dtype=tf.int64, seed=1)  # 11
            chose_random = tf.random_uniform([], minval=0, maxval=1, dtype=tf.float32) < eps  # False because eps = 0
            stochastic_action = tf.cond(chose_random, lambda: random_action, lambda: deterministic_action)  # s_action should be 4 but it is 1
            #output_action = tf.cond(stochastic_ph, lambda: stochastic_action, lambda: deterministic_action)

    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init, feed_dict={stochastic_ph: True})

    print ("s_ph = ", stochastic_ph)
    d_action = sess.run(deterministic_action)
    print ("det_action= ", d_action)
    r_action = sess.run(random_action)
    print ("rand_action= ", r_action)
    e = sess.run(eps)
    c_action = sess.run(chose_random)
    print ("chose_rand= ", c_action)
    s_action = sess.run(stochastic_action)
    print ("s_action= ", s_action)
    #output = sess.run(output_action)
Answer (score 0):
This happens because you evaluate the op again in a new sess.run. Since deterministic_action generates a random number, the result of that second evaluation is the next random number after 4, which is 1. That is also what you get when you fetch the value of deterministic_action again in the last step.
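To make the effect concrete, here is a minimal sketch (separate from the code above) showing that a seeded random op is re-executed on every sess.run and therefore yields the next value of its sequence each time:

import tensorflow as tf

r = tf.random_uniform([], minval=0, maxval=15, dtype=tf.int64, seed=0)

with tf.Session() as sess:
    print(sess.run(r))  # first draw of the seeded sequence (4 in the question's run)
    print(sess.run(r))  # the op runs again, so this is the next draw, not a cached 4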
Edit:
print ("s_ph = ", stochastic_ph)
d_action = sess.run(deterministic_action)
print ("det_action= ", d_action)
r_action = sess.run(random_action)
print ("rand_action= ", r_action)
e = sess.run(eps)
c_action = sess.run(chose_random)
print ("chose_rand= ", c_action)
s_action, d_action = sess.run([stochastic_action, deterministic_action])
print ("s_action= ", s_action)
print ("det_action= ", d_action)
Result:
s_ph = Tensor("stochastic:0", shape=(), dtype=bool, device=/device:CPU:0)
det_action= 4
rand_action= 11
chose_rand= False
s_action= 1
det_action= 1
Now all you need to do is run everything in a single sess.run:

d_action, r_action, e, c_action, s_action = sess.run([deterministic_action, random_action, eps, chose_random, stochastic_action])
print ("det_action= ", d_action)
print ("rand_action= ", r_action)
print ("chose_rand= ", c_action)
print ("s_action= ", s_action)
Result:
s_ph = Tensor("stochastic:0", shape=(), dtype=bool, device=/device:CPU:0)
det_action= 4
rand_action= 11
chose_rand= False
s_action= 4
Update:
It was not clear to me why random_uniform generates different values even though a seed is set. The reason is that the code keeps running with the same session object that was used to initialize the variables. If the code is modified to use a new session object, the result is as follows:
print ("s_ph = ", stochastic_ph)
d_action = sess.run(deterministic_action)
print ("det_action= ", d_action)
sess.close()
sess = tf.Session()
sess.run(init, feed_dict={stochastic_ph: True})
s_action = sess.run(stochastic_action)
print ("s_action= ", s_action)
Result:
s_ph = Tensor("stochastic:0", shape=(), dtype=bool, device=/device:CPU:0)
det_action= 4
s_action= 4
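As a self-contained sketch of the same point (assuming TF 1.x behaviour, where an op-level random state lives in the session): the seeded sequence advances within one session but starts over in a fresh session:

import tensorflow as tf

r = tf.random_uniform([], minval=0, maxval=15, dtype=tf.int64, seed=0)

with tf.Session() as sess:
    first = sess.run(r)
    print(sess.run(r) == first)   # usually False: the sequence has advanced

with tf.Session() as sess:        # a brand-new session resets the op's random state
    print(sess.run(r) == first)   # True: the seeded sequence starts from the beginning again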