Scaling the actor network output to the action-space range in Keras-RL

Date: 2018-11-21 00:39:38

Tags: python tensorflow keras deep-learning keras-rl

I am trying to implement DDPG with Keras-RL and have the following actor network.

from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten

actor = Sequential()
actor.add(Flatten(input_shape=(1,) + env.observation_space.shape))
actor.add(Dense(16))
actor.add(Activation('relu'))
actor.add(Dense(16))
actor.add(Activation('relu'))
actor.add(Dense(16))
actor.add(Activation('relu'))
actor.add(Dense(nb_actions))
actor.add(Activation('linear'))

However, I would like to scale the output to the action-space range (env.action_space) of the custom Gym environment for my problem.

https://pemami4911.github.io/blog/2016/08/21/ddpg-rl.html shows how to do this with the tflearn API:

def create_actor_network(self):
    inputs = tflearn.input_data(shape=[None, self.s_dim])
    net = tflearn.fully_connected(inputs, 400)
    net = tflearn.layers.normalization.batch_normalization(net)
    net = tflearn.activations.relu(net)
    net = tflearn.fully_connected(net, 300)
    net = tflearn.layers.normalization.batch_normalization(net)
    net = tflearn.activations.relu(net)
    # Final layer weights are init to Uniform[-3e-3, 3e-3]
    w_init = tflearn.initializations.uniform(minval=-0.003, maxval=0.003)
    out = tflearn.fully_connected(
        net, self.a_dim, activation='tanh', weights_init=w_init)
    # Scale output to -action_bound to action_bound
    scaled_out = tf.multiply(out, self.action_bound)
    return inputs, out, scaled_out

What is the equivalent way to scale the output layer for my requirement?
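One plausible Keras equivalent of the tflearn snippet above is to end the actor with a `tanh` activation (squashing to [-1, 1]) followed by a `Lambda` layer that multiplies by the bound. This is a sketch under assumed values: `obs_shape`, `nb_actions`, and `action_bound` are placeholders for what would come from `env.observation_space.shape` and `env.action_space.high` in a real setup.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Flatten, Lambda

# Assumed example values; in practice take these from the environment,
# e.g. obs_shape = env.observation_space.shape,
#      action_bound = env.action_space.high (assumed symmetric here).
obs_shape = (4,)
nb_actions = 2
action_bound = 2.0  # action space assumed to be [-2, 2]

actor = Sequential()
actor.add(Flatten(input_shape=(1,) + obs_shape))
actor.add(Dense(16))
actor.add(Activation('relu'))
actor.add(Dense(16))
actor.add(Activation('relu'))
actor.add(Dense(nb_actions))
actor.add(Activation('tanh'))                  # squash output to [-1, 1]
actor.add(Lambda(lambda x: x * action_bound))  # rescale to [-bound, bound]

obs = np.random.randn(8, 1, *obs_shape).astype('float32')
actions = actor.predict(obs)
```

For an asymmetric range, the same `Lambda` idea works with an affine map, e.g. `low + (x + 1) / 2 * (high - low)` applied to the tanh output.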

0 Answers:

There are no answers.