Consistent forward/backward passes with TensorFlow dropout

Date: 2017-07-31 15:45:27

Tags: tensorflow backpropagation reinforcement-learning regularized

For reinforcement learning, the forward pass of the neural network is typically applied at every step of an episode to compute the policy. Backpropagation can then be used to compute the parameter gradients. A simplified implementation of my network looks like this:

import numpy as np
import tensorflow as tf
import tensorflow.contrib.slim as slim


class AC_Network(object):

    def __init__(self, s_size, a_size, scope, trainer, parameters_net):
        with tf.variable_scope(scope):
            self.is_training = tf.placeholder(shape=[], dtype=tf.bool)
            self.inputs = tf.placeholder(shape=[None, s_size], dtype=tf.float32)
            # (...)  layer_size and policy_loss_multiplier are defined in the omitted code
            layer = slim.fully_connected(self.inputs,
                                         layer_size,
                                         activation_fn=tf.nn.relu,
                                         biases_initializer=None)
            # Dropout: TensorFlow samples the random mask whenever this op is
            # evaluated with is_training == True.
            layer = tf.contrib.layers.dropout(inputs=layer,
                                              keep_prob=parameters_net["dropout_keep_prob"],
                                              is_training=self.is_training)

            self.policy = slim.fully_connected(layer, a_size,
                                               activation_fn=tf.nn.softmax,
                                               biases_initializer=None)

            # Policy-gradient loss over the actions actually taken.
            self.actions = tf.placeholder(shape=[None], dtype=tf.int32)
            self.advantages = tf.placeholder(shape=[None], dtype=tf.float32)
            actions_onehot = tf.one_hot(self.actions, a_size, dtype=tf.float32)
            responsible_outputs = tf.reduce_sum(self.policy * actions_onehot, [1])
            self.policy_loss = - policy_loss_multiplier * tf.reduce_mean(tf.log(responsible_outputs) * self.advantages)

            local_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope)
            self.gradients = tf.gradients(self.policy_loss, local_vars)
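
For reference, here is a minimal, self-contained sketch (not part of my actual code) of the behaviour I am asking about: in TensorFlow 1.x, dropout samples a new random mask every time the op is evaluated, so consecutive sess.run() calls on the same graph generally mask different units:

import tensorflow as tf

x = tf.ones([1, 8])
dropped = tf.contrib.layers.dropout(x, keep_prob=0.5, is_training=True)

with tf.Session() as sess:
    # Each run draws a fresh mask, so the two printed outputs usually differ.
    print(sess.run(dropped))
    print(sess.run(dropped))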

Now, during training, I roll out the episode with consecutive forward passes (again simplified):

s = self.local_env.reset()  # list of input variables for the first step
done = False
while not done:
    a_dist = sess.run(self.local_AC.policy,
                      feed_dict={self.local_AC.inputs: [s],
                                 self.local_AC.is_training: True})
    a = np.argmax(a_dist)
    s, r, done, extra_stat = self.local_env.step(a)
    # (...)

Finally, I compute the gradients with a backward pass:

p_l, grad = sess.run([self.local_AC.policy_loss,
                      self.local_AC.gradients],
                     feed_dict={self.local_AC.inputs: np.vstack(comb_observations),
                                self.local_AC.is_training: True,
                                self.local_AC.actions: np.hstack(comb_actions),
                                # policy_loss also depends on the advantages placeholder;
                                # comb_advantages stands in for the collected advantage values
                                self.local_AC.advantages: np.hstack(comb_advantages)})
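
(For completeness, and independent of the dropout question: in the full code these gradients are then applied with the trainer passed to the constructor. A rough sketch of that step, assuming the usual A3C-style setup with a shared variable scope named 'global', would be:)

# Sketch only -- inside AC_Network.__init__, after self.gradients is defined.
global_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'global')
self.apply_grads = trainer.apply_gradients(zip(self.gradients, global_vars))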

(Note that I may have introduced mistakes above while trying to strip out as much of the original code unrelated to the question as possible.)

Finally, the question: is there a way to make sure that all consecutive calls to sess.run() produce the same dropout structure? Ideally, I would like exactly the same dropout mask within each episode, changing it only between episodes. Things seem to work, but I am still suspicious.
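
One direction I have considered, shown here only as a sketch of what I mean by "the same dropout structure within an episode" (the dropout_mask placeholder and the mask-sampling line are hypothetical, not part of my current code): replace the built-in dropout with an explicit binary mask that is sampled once per episode with numpy and fed in through a placeholder:

# Sketch: in AC_Network.__init__, instead of tf.contrib.layers.dropout
keep_prob = parameters_net["dropout_keep_prob"]
self.dropout_mask = tf.placeholder(shape=[layer_size], dtype=tf.float32)
layer = layer * self.dropout_mask / keep_prob   # inverted dropout with a fixed mask

# Sketch: in the training code, sample one mask per episode...
mask = (np.random.rand(layer_size) < keep_prob).astype(np.float32)
# ...and feed the same mask to every forward pass of the episode and to the
# final backward pass:
#     feed_dict = {..., self.local_AC.dropout_mask: mask}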

0 Answers:

There are no answers yet.