Below is a code snippet in which, given a state, a state-dependent distribution (prob_policy) is built and an action is sampled from it; the graph's weights are then updated using a loss equal to -1 times the log of the probability of picking that action. In the example below, both the mean (mu) and the covariance (sigma) of the MultivariateNormal are trainable/learned.
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

# make the graph
state = tf.placeholder(tf.float32, (1, 2), name="state")
mu = tf.contrib.layers.fully_connected(
    inputs=state,
    num_outputs=2,
    biases_initializer=tf.ones_initializer)
sigma = tf.contrib.layers.fully_connected(
    inputs=state,
    num_outputs=2,
    biases_initializer=tf.ones_initializer)
sigma = tf.squeeze(sigma)
mu = tf.squeeze(mu)
prob_policy = tfp.distributions.MultivariateNormalDiag(loc=mu, scale_diag=sigma)
action = prob_policy.sample()
picked_action_prob = prob_policy.prob(action)
loss = -tf.log(picked_action_prob)
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)

# run the optimizer
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    state_input = np.expand_dims([0., 0.], 0)
    _, action_loss = sess.run([train_op, loss], {state: state_input})
    print(action_loss)
However, when I replace this line
prob_policy = tfp.distributions.MultivariateNormalDiag(loc=mu, scale_diag=sigma)
with the following line (and comment out the lines that create the sigma layer and squeeze it)
prob_policy = tfp.distributions.MultivariateNormalDiag(loc=mu, scale_diag=[1.,1.])
I get the following error
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ["<tf.Variable 'fully_connected/weights:0' shape=(2, 2) dtype=float32_ref>", "<tf.Variable 'fully_connected/biases:0' shape=(2,) dtype=float32_ref>"] and loss Tensor("Neg:0", shape=(), dtype=float32).
I don't understand why this happens. Shouldn't it still be able to take the gradient with respect to the weights in the mu layer? Why does making the distribution's covariance constant suddenly make it non-differentiable?
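(For anyone debugging the same thing, here is a minimal diagnostic sketch, assuming the graph built above: it lists which trainable variables the loss is actually connected to. The "No gradients provided for any variable" error is raised when every entry comes back None.)

# Diagnostic sketch (not part of the original question): inspect which trainable
# variables receive a gradient from the loss in the graph defined above.
grads = tf.gradients(loss, tf.trainable_variables())
for var, grad in zip(tf.trainable_variables(), grads):
    print(var.name, "-> no gradient" if grad is None else "-> has gradient")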
Answer 0 (score: 0)
I had to change this line
action = prob_policy.sample()
to this line
action = tf.stop_gradient(prob_policy.sample())
I would love an explanation of why learning the weights of the covariance makes the weights of the location differentiable with respect to the loss, while making the covariance constant does not, and how this change to the sampling line affects that. Thanks!
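For reference, a minimal sketch of how this fix slots into the graph from the question (assuming the same state placeholder and mu layer as above; the sigma layer is dropped and the covariance held constant, which is the case that was failing):

# Workaround from this answer applied to the question's graph.
prob_policy = tfp.distributions.MultivariateNormalDiag(loc=mu, scale_diag=[1., 1.])
# Treat the sampled action as a constant; gradients still flow to mu
# through prob_policy.prob(action) in the loss below.
action = tf.stop_gradient(prob_policy.sample())
picked_action_prob = prob_policy.prob(action)
loss = -tf.log(picked_action_prob)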
Answer 1 (score: 0)
There is an issue caused by some caching we do inside MVNDiag (and other TransformedDistribution subclasses) to enable invertibility.
If you do a + 0 after your .sample() (as a workaround), the gradient will work.
I would also suggest using dist.log_prob(..) instead of tf.log(dist.prob(..)). Better numerics.
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

# make the graph
state = tf.placeholder(tf.float32, (1, 2), name="state")
mu = tf.contrib.layers.fully_connected(
    inputs=state,
    num_outputs=2,
    biases_initializer=tf.ones_initializer)
sigma = tf.contrib.layers.fully_connected(
    inputs=state,
    num_outputs=2,
    biases_initializer=tf.ones_initializer)
sigma = tf.squeeze(sigma)
mu = tf.squeeze(mu)
prob_policy = tfp.distributions.MultivariateNormalDiag(loc=mu, scale_diag=[1., 1.])
action = prob_policy.sample() + 0
loss = -prob_policy.log_prob(action)
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)

# run the optimizer
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    state_input = np.expand_dims([0., 0.], 0)
    _, action_loss = sess.run([train_op, loss], {state: state_input})
    print(action_loss)
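To illustrate the "better numerics" remark above (an aside, not from the original answer): for a point far out in the tail, dist.prob(..) underflows to 0 in float32, so taking tf.log of it gives -inf, while dist.log_prob(..) stays finite.

# Aside: why log_prob is numerically safer than log(prob) (illustrative values).
dist = tfp.distributions.MultivariateNormalDiag(loc=[0., 0.], scale_diag=[1., 1.])
far_point = [30., 30.]  # deep in the tail: prob underflows to 0 in float32
with tf.Session() as sess:
    print(sess.run(tf.log(dist.prob(far_point))))  # -inf
    print(sess.run(dist.log_prob(far_point)))      # about -901.8, still finite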