Edit: FrozenLake-v0 seems to show the same behaviour. Note that I am not interested in plain Q-learning, because I want to see solutions that work with continuous observation spaces.
I recently created the banana_gym OpenAI environment. The scenario is as follows:
You have a banana. It has to be sold within 2 days, because it will be bad on day 3. You may choose a price x, but the banana will only be sold with a probability that decreases with the price (the get_chance function in the script below). If it is sold, the reward is x - 1. If the banana is not sold by day 3, the reward is -1. (Intuition: you paid 1 Euro for the banana.) Hence the environment is non-deterministic (stochastic).
Actions: you can set the price to one of {0.00, 0.10, 0.20, ..., 2.00}
Observation: the remaining time (source)
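For concreteness, here is a minimal sketch of what such an environment could look like with the classic gym API. This is just my reading of the scenario above, not the actual banana_gym code; the class name is made up, and the sell probability is the get_chance formula from the script below:

import math
import random

import gym
from gym import spaces


class BananaEnvSketch(gym.Env):
    """Minimal sketch of the banana scenario (not the real banana_gym code)."""

    TOTAL_TIME_STEPS = 2  # the banana is bad on day 3

    def __init__(self):
        # 21 discrete actions: prices 0.00, 0.10, ..., 2.00
        self.action_space = spaces.Discrete(21)
        # observation: the remaining time steps
        self.observation_space = spaces.Discrete(self.TOTAL_TIME_STEPS + 1)
        self.remaining = self.TOTAL_TIME_STEPS

    def reset(self):
        self.remaining = self.TOTAL_TIME_STEPS
        return self.remaining

    def step(self, action):
        price = action * 0.10
        self.remaining -= 1
        chance = (1 + math.e) / (1 + math.exp(price + 1))
        if random.random() < chance:           # sold: reward is price - 1
            return self.remaining, price - 1, True, {}
        if self.remaining == 0:                # not sold and time is up
            return self.remaining, -1.0, True, {}
        return self.remaining, 0.0, False, {}  # not sold yet, try again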
I computed the optimal policy:
Opt at step 1: price 1.50 has value -0.26 (chance: 0.28)
Opt at step 2: price 1.10 has value -0.55 (chance: 0.41)
This also matches my intuition: first try to sell the banana at a higher price, because if you don't sell it, you know you still have another attempt left. Then lower the price, but keep it above 0.00.
I'm pretty sure this one is correct, but for the sake of completeness, here is the script that computes it:
#!/usr/bin/env python
"""Calculate the optimal banana pricing policy."""
import math

import numpy as np


def main(total_time_steps, price_not_sold, chance_to_sell):
    """
    Compare the optimal policy to a given policy.

    Parameters
    ----------
    total_time_steps : int
        How often the agent may offer the banana
    price_not_sold : float
        How much do we have to pay if we don't sell until
        total_time_steps is over?
    chance_to_sell : function
        A function that takes the price as an input and outputs the
        probability that a banana will be sold.
    """
    r = get_optimal_policy(total_time_steps,
                           price_not_sold,
                           chance_to_sell)
    enum_obj = enumerate(zip(r['optimal_prices'], r['values']), start=1)
    for i, (price, value) in enum_obj:
        print("Opt at step {:>2}: price {:>4.2f} has value {:>4.2f} "
              "(chance: {:>4.2f})"
              .format(i, price, value, chance_to_sell(price)))


def get_optimal_policy(total_time_steps,
                       price_not_sold,
                       chance_to_sell=None):
    """
    Get the optimal policy for the Banana environment.

    This means for each time step, calculate what is the smartest price
    to set.

    Parameters
    ----------
    total_time_steps : int
    price_not_sold : float
    chance_to_sell : function, optional

    Returns
    -------
    results : dict
        'optimal_prices' : List of best prices to set at a given time
        'values' : values of the value function at a given step with the
                   optimal policy
    """
    if chance_to_sell is None:
        chance_to_sell = get_chance
    values = [None for i in range(total_time_steps + 1)]
    optimal_prices = [None for i in range(total_time_steps)]

    # punishment if a banana is not sold
    values[total_time_steps] = (price_not_sold - 1)
    for i in range(total_time_steps - 1, -1, -1):
        opt_price = None
        opt_price_value = None
        for price in np.arange(0.0, 2.01, 0.10):
            p_t = chance_to_sell(price)
            reward_sold = (price - 1)
            value = p_t * reward_sold + (1 - p_t) * values[i + 1]
            if (opt_price_value is None) or (opt_price_value < value):
                opt_price_value = value
                opt_price = price
        values[i] = opt_price_value
        optimal_prices[i] = opt_price
    return {'optimal_prices': optimal_prices,
            'values': values}


def get_chance(x):
    """
    Get the probability that a banana will be sold at a given price x.

    Parameters
    ----------
    x : float

    Returns
    -------
    chance_to_sell : float
    """
    return (1 + math.exp(1)) / (1. + math.exp(x + 1))


if __name__ == '__main__':
    total_time_steps = 2
    main(total_time_steps=total_time_steps,
         price_not_sold=0.0,
         chance_to_sell=get_chance)
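As a quick sanity check of the two printed values (-0.26 and -0.55), a manual Bellman backup with the get_chance formula reproduces them (my own arithmetic, not part of the original script):

import math

def p(x):
    return (1 + math.e) / (1 + math.exp(x + 1))

# Step 2 is the last chance: sell at 1.10 or take the -1 punishment.
v2 = p(1.1) * (1.1 - 1) + (1 - p(1.1)) * (-1.0)  # ~ -0.55
# Step 1: sell at 1.50 now, or fall back to the step-2 value.
v1 = p(1.5) * (1.5 - 1) + (1 - p(1.5)) * v2      # ~ -0.26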
The following DQN agent (implemented with Keras-RL) works for the CartPole-v0 environment, but for the Banana environment it learns the policy
1: Take action 19 (price= 1.90)
0: Take action 14 (price= 1.40)
It goes in the right direction, but it consistently learns this strategy, which is not the optimal one.
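For reference, plugging the learned prices into the same kind of backup shows how close it gets (again my own arithmetic, using the get_chance formula from above):

import math

def p(x):
    return (1 + math.e) / (1 + math.exp(x + 1))

# Learned policy: price 1.90 first, then 1.40.
v_last = p(1.4) * (1.4 - 1) + (1 - p(1.4)) * (-1.0)   # ~ -0.57
v_first = p(1.9) * (1.9 - 1) + (1 - p(1.9)) * v_last  # ~ -0.28, optimum is -0.26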
Why does the DQN agent not learn the optimal policy?
Execution:
$ python dqn.py --env Banana-v0 --steps 50000
The code of dqn.py:
#!/usr/bin/env python
import numpy as np

import gym
import gym_banana

from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.optimizers import Adam

from rl.agents.dqn import DQNAgent
from rl.policy import LinearAnnealedPolicy, EpsGreedyQPolicy
from rl.memory import EpisodeParameterMemory


def main(env_name, nb_steps):
    # Get the environment and extract the number of actions.
    env = gym.make(env_name)
    np.random.seed(123)
    env.seed(123)
    nb_actions = env.action_space.n
    input_shape = (1,) + env.observation_space.shape
    model = create_nn_model(input_shape, nb_actions)

    # Finally, we configure and compile our agent.
    memory = EpisodeParameterMemory(limit=2000, window_length=1)
    policy = LinearAnnealedPolicy(EpsGreedyQPolicy(), attr='eps', value_max=1.,
                                  value_min=.1, value_test=.05,
                                  nb_steps=1000000)
    agent = DQNAgent(model=model, nb_actions=nb_actions, policy=policy,
                     memory=memory, nb_steps_warmup=50000,
                     gamma=.99, target_model_update=10000,
                     train_interval=4, delta_clip=1.)
    agent.compile(Adam(lr=.00025), metrics=['mae'])
    agent.fit(env, nb_steps=nb_steps, visualize=False, verbose=1)

    # Get the learned policy and print it
    policy = get_policy(agent, env)
    for remaining_time, action in sorted(policy.items(), reverse=True):
        print("{:>2}: Take action {:>2} (price={:>5.2f})"
              .format(remaining_time, action, 2 / 20. * action))


def create_nn_model(input_shape, nb_actions):
    """
    Create a neural network model which maps the input to actions.

    Parameters
    ----------
    input_shape : tuple of int
    nb_actions : int

    Returns
    -------
    model : keras Model object
    """
    model = Sequential()
    model.add(Flatten(input_shape=input_shape))
    model.add(Dense(32, activation='relu'))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(512, activation='relu'))
    model.add(Dense(nb_actions, activation='linear'))  # important to be linear
    print(model.summary())
    return model


def get_policy(agent, env):
    policy = {}
    for x_in in range(env.TOTAL_TIME_STEPS):
        action = agent.forward(np.array([x_in]))
        policy[x_in] = action
    return policy


def get_parser():
    """Get parser object for dqn.py."""
    from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter
    parser = ArgumentParser(description=__doc__,
                            formatter_class=ArgumentDefaultsHelpFormatter)
    parser.add_argument("--env",
                        dest="environment",
                        help="OpenAI Gym environment",
                        metavar="ENVIRONMENT",
                        default="CartPole-v0")
    parser.add_argument("--steps",
                        dest="steps",
                        default=10000,
                        type=int,
                        help="how many steps to train?")
    return parser


if __name__ == "__main__":
    args = get_parser().parse_args()
    main(args.environment, args.steps)
Answer (score: 1):
If I'm interpreting your code correctly, I think you're training for 50K steps:
$ python dqn.py --env Banana-v0 --steps 50000
but you also have a warmup period of 50K steps, due to this argument in the DQNAgent constructor:
nb_steps_warmup=50000
I believe this means that you're not actually doing any training at all, because the warmup period is only used to collect experience for the replay buffer. Is that correct? If so, the solution may be as simple as reducing the number of warmup steps or increasing the number of training steps.
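As a sketch of the suggested fix, reusing the constructor call from the question (the 1000 warmup steps below are just an illustrative value, not a tuned one):

agent = DQNAgent(model=model, nb_actions=nb_actions, policy=policy,
                 memory=memory, nb_steps_warmup=1000,  # was 50000
                 gamma=.99, target_model_update=10000,
                 train_interval=4, delta_clip=1.)
agent.fit(env, nb_steps=50000, visualize=False, verbose=1)  # now mostly real training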
For future reference (or in case I made a mistake interpreting the code above), I recommend always creating a plot of the learning curve (episode reward on the y-axis, training steps on the x-axis). That is always useful for understanding what is going on, and it helps you focus your debugging on the parts of the code that matter. If the rewards do not increase at all, you know the agent is simply not learning, for whatever reason. If they do increase for a while but then plateau, you can try lowering the learning rate. If they increase and keep increasing right up to the end, you know it probably has not converged yet, and you can try increasing the number of training steps or raising the learning rate.
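A minimal sketch of such a plot with keras-rl, assuming that fit() returns a Keras History object with the per-episode rewards stored under the 'episode_reward' key (it did in the versions I have used; adjust the key name if yours differs). The x-axis here is the episode index rather than training steps, since that is what the history exposes directly:

import matplotlib.pyplot as plt

history = agent.fit(env, nb_steps=nb_steps, visualize=False, verbose=1)
plt.plot(history.history['episode_reward'])  # one value per finished episode
plt.xlabel('episode')
plt.ylabel('episode reward')
plt.savefig('learning_curve.png')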