tf.contrib.layer.fully_connected, tf.layers.dense, tf.contrib.slim.fully_connected, tf.keras.layers.Dense

Date: 2019-01-16 16:53:20

Tags: python tensorflow reinforcement-learning q-learning

I am trying to implement a policy gradient for the contextual bandit problem (https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-1-5-contextual-bandits-bff01d1aad9c).

I am defining a model in tensorflow that uses a single fully connected layer to solve this problem.

I am trying out different APIs from tensorflow, but want to avoid the contrib package, since it is not supported by tensorflow going forward. I am interested in the keras API, since I am already familiar with the functional interface and it is now available as tf.keras. However, I only seem to get results when using tf.contrib.slim.fully_connected or tf.contrib.layers.fully_connected (the former calls the latter).

The following two snippets work correctly (one_hot_encoded_state_input and num_actions both conform to the tensor shapes expected by the layers).

import tensorflow.contrib.slim as slim
action_probability_distribution = slim.fully_connected(
    one_hot_encoded_state_input,
    num_actions,
    biases_initializer=None,
    activation_fn=tf.nn.sigmoid,
    weights_initializer=tf.ones_initializer())

from tensorflow.contrib.layers import fully_connected
action_probability_distribution = fully_connected(
    one_hot_encoded_state_input,
    num_actions,
    biases_initializer=None,
    activation_fn=tf.nn.sigmoid,
    weights_initializer=tf.ones_initializer())

On the other hand, neither of the following works:

action_probability_distribution = tf.layers.dense(
    one_hot_encoded_state_input,
    num_actions,
    activation=tf.nn.sigmoid,
    bias_initializer=None,
    kernel_initializer=tf.ones_initializer())

action_probability_distribution = tf.keras.layers.Dense(
    num_actions,
    activation='sigmoid',
    bias_initializer=None,
    kernel_initializer='Ones')(one_hot_encoded_state_input)

The last two cases use tensorflow's high-level APIs, layers and keras. Ideally, I would like to know whether I am incorrectly reimplementing the first two cases with the last two, and whether the only issue I am running into is that the last two are not equivalent to the first two.

For completeness, here is all the code needed to run this (note: python 3.5.6 and tensorflow 1.12.0 were used).

import tensorflow as tf
import numpy as np
tf.reset_default_graph()

num_states = 3
num_actions = 4
learning_rate = 1e-3

state_input = tf.placeholder(shape=(None,),dtype=tf.int32, name='state_input')
one_hot_encoded_state_input = tf.one_hot(state_input, num_states)

# DOESN'T WORK
action_probability_distribution = tf.keras.layers.Dense(num_actions, activation='sigmoid', bias_initializer=None, kernel_initializer = 'Ones')(one_hot_encoded_state_input)

# WORKS
# import tensorflow.contrib.slim as slim
# action_probability_distribution = slim.fully_connected(one_hot_encoded_state_input,num_actions,\
#     biases_initializer=None,activation_fn=tf.nn.sigmoid,weights_initializer=tf.ones_initializer())

# WORKS
# from tensorflow.contrib.layers import fully_connected
# action_probability_distribution = fully_connected(one_hot_encoded_state_input,num_actions,\
#     biases_initializer=None,activation_fn=tf.nn.sigmoid,weights_initializer=tf.ones_initializer())

# DOESN'T WORK
# action_probability_distribution = tf.layers.dense(one_hot_encoded_state_input,num_actions, activation=tf.nn.sigmoid, bias_initializer=None, kernel_initializer=tf.ones_initializer())

action_probability_distribution = tf.squeeze(action_probability_distribution)
action_chosen = tf.argmax(action_probability_distribution)

reward_input = tf.placeholder(shape=(None,), dtype=tf.float32, name='reward_input')
action_input = tf.placeholder(shape=(None,), dtype=tf.int32, name='action_input')
responsible_weight = tf.slice(action_probability_distribution, action_input, [1])
loss = -(tf.log(responsible_weight)*reward_input)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
update = optimizer.minimize(loss)


bandits = np.array([[0.2,0,-0.0,-5],
                    [0.1,-5,1,0.25],
                    [-5,5,5,5]])

assert bandits.shape == (num_states, num_actions)

def get_reward(state, action): # the lower the value of bandits[state][action], the higher the likelihood of reward
    if np.random.randn() > bandits[state][action]:
        return 1
    return -1

max_episodes = 10000
epsilon = 0.1

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    rewards = np.zeros(num_states)
    for episode in range(max_episodes):
        state = np.random.randint(0,num_states)
        action = sess.run(action_chosen, feed_dict={state_input:[state]})
        if np.random.rand(1) < epsilon:
            action = np.random.randint(0, num_actions)

        reward = get_reward(state, action)
        sess.run([update, action_probability_distribution, loss], feed_dict = {reward_input: [reward], action_input: [action], state_input: [state]})

        rewards[state] += reward

        if episode%500 == 0:
            print(rewards)

When using the blocks commented # WORKS, the agent learns and maximizes the reward across all three states. When using the ones commented # DOESN'T WORK, on the other hand, the agent does not learn and usually converges very quickly to picking a single action. For example, the working behavior should print a rewards list with positive, growing numbers (good cumulative reward for each state). The non-working behavior looks like a rewards list where only one action accumulates reward, usually at the expense of the others (negative cumulative reward).

1 Answer:

Answer (score: 1):

For anyone who stumbles upon this, especially since tensorflow has so many APIs that implement the same thing: the difference lies in bias initialization and its defaults. For tf.contrib and tf.slim, passing biases_initializer=None means that no bias is used. Replicating this with tf.layers and tf.keras requires use_bias=False.
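As a minimal sketch of that fix (untested here, and assuming the same one_hot_encoded_state_input and num_actions from the question), the two non-working snippets become equivalent to the slim/contrib versions once use_bias=False is passed instead of a None bias initializer:

# tf.layers version: use_bias=False replaces bias_initializer=None
action_probability_distribution = tf.layers.dense(
    one_hot_encoded_state_input,
    num_actions,
    activation=tf.nn.sigmoid,
    use_bias=False,
    kernel_initializer=tf.ones_initializer())

# tf.keras version: same idea, drop the bias entirely
action_probability_distribution = tf.keras.layers.Dense(
    num_actions,
    activation='sigmoid',
    use_bias=False,
    kernel_initializer='ones')(one_hot_encoded_state_input)

With use_bias=False, both layers reduce to sigmoid(W·x) with W initialized to ones, matching the slim/contrib behavior, so the agent should learn as in the # WORKS cases.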