For tf.random_uniform and similar random ops, I understand that "random ops are stateful, and create new random values each time they are evaluated." Because of this, two calls to session.run() produce different values:
# Each time we run these ops, different results are generated
import tensorflow as tf

norm = tf.random_normal((1,))  # shape chosen for illustration
sess = tf.Session()
print(sess.run(norm))
print(sess.run(norm))
My question is: if my graph refers to the random op twice, is it guaranteed that both "uses" see the same value within a single run()? For example:
rnd_source = tf.random_normal(...)
x1 = rnd_source + 0.
x2 = rnd_source * 1.
sess.run([x1, x2])
If x1 and x2 are not guaranteed to have the same value, is there a simple way to store the random value in a tensor (not a tf.Variable) to ensure the random op is evaluated only once? And if x1 and x2 are guaranteed to have the same value, is there a way to force the random op to be re-evaluated within a single run in order to obtain fresh random values?
Answer 0 (score: 1)
You have already done it without realizing it. Just assign the value to a tensor, then use that value:
rnd_source = tf.random_normal((1,))
m = rnd_source  # m names the same op; it is evaluated once per run
Now, on each run, m evaluates to a single draw from the normal distribution, and the rest of the graph is derived from it:
In [27]: for i in range(10):
...: a, b, c, d, e = sess.run( [m*1, m+0, m+1, m+2, m+3 ] )
...: print(a, b, c, d, e)
[-2.1935725] [-2.1935725] [-1.1935725] [-0.19357252] [0.8064275]
[-0.5607107] [-0.5607107] [0.43928927] [1.4392893] [2.4392893]
[0.17031813] [0.17031813] [1.1703181] [2.1703181] [3.1703181]
[0.05647242] [0.05647242] [1.0564724] [2.0564723] [3.0564723]
[-0.2119268] [-0.2119268] [0.7880732] [1.7880732] [2.7880733]
[-0.07041783] [-0.07041783] [0.9295822] [1.9295821] [2.929582]
[-0.9486307] [-0.9486307] [0.05136931] [1.0513693] [2.0513692]
[1.3629643] [1.3629643] [2.3629642] [3.3629642] [4.362964]
[1.6997207] [1.6997207] [2.6997209] [3.6997209] [4.699721]
[1.480969] [1.480969] [2.480969] [3.480969] [4.480969]
Now, each time you train you get a new value from the distribution, but the rest of the graph, built from m, will be consistent with it...
To illustrate further, let's add some new nodes...
In [28]: n = m+0
In [29]: o = m+1
Now,
In [31]: for i in range(10):
...: a, b = sess.run([n, o])
...: print(a, b)
...:
[0.32054538] [1.3205454]
[-0.6587958] [0.34120423]
[-0.8067821] [0.19321787]
[-0.29313084] [0.7068691]
[-1.1867933] [-0.18679333]
[1.4355402] [2.4355402]
[0.45581594] [1.4558159]
[-1.9583491] [-0.9583491]
[-1.2682568] [-0.26825678]
[1.534502] [2.534502]
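As for the second part of the question (forcing fresh random values within a single run): each op in a TF1 graph is evaluated at most once per run() call (outside of control-flow constructs like tf.while_loop), so there is no simple way to make the same random op yield two different values in one run. The usual workaround is to create separate random ops, each of which produces its own independent draw. A minimal sketch, reusing the sess from above (shapes chosen for illustration):

# Each call to tf.random_normal adds a *separate* op to the graph,
# so the two draws are independent even within a single run().
r1 = tf.random_normal((1,))
r2 = tf.random_normal((1,))

a, b = sess.run([r1, r2])
print(a, b)  # two different values from the same run

Because r1 and r2 are distinct ops, each is evaluated once per run, giving you a fresh, independent value from each.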