PyTorch Double DQN not working properly

Asked: 2018-08-15 12:19:19

Tags: python pytorch reinforcement-learning

I am trying to build a Double DQN for cartpole-v0, but the network does not seem to work as expected and plateaus at a reward of around 8-9. What am I doing wrong?

Each step of the learning phase:

def make_step(model, target_model, optimizer, criterion, observation, action, reward, next_observation):
    inp_obv = torch.Tensor(observation)
    q = model(inp_obv)
    q_argmax = torch.argmax(q.data)
    q = q[action]

    inp_next_obv = torch.Tensor(next_observation)
    q_next = target_model(inp_next_obv)
    q_a_next = q_next[q_argmax]

    #LHS of the double DQN equation
    obv_reward = q

    #RHS of the double DQN equation
    target_reward = torch.Tensor([reward]) + GAMMA*q_a_next.detach()

    #Backprop
    loss = criterion(obv_reward, target_reward) #MSELoss
    loss.backward()

The code that wraps make_step:

optimizer.zero_grad() #RMSprop on net
if e%2 == 0:
    target_net.load_state_dict(net.state_dict())
for i in range(len(data)):
    observation, action, reward, next_observation = data[i]
    make_step(net, target_net, optimizer, criterion, observation, action, reward, next_observation)

GAMMA *= GAMMA
optimizer.step()

What am I doing wrong? Thanks.

1 Answer:

Answer 0 (score: 0)

Increasing the interval between target network updates (copying the weights every 100 episodes instead of every 2) solved the problem.

optimizer.zero_grad() #RMSprop on net
if e % 100 == 0:
    target_net.load_state_dict(net.state_dict())
for i in range(len(data)):
    observation, action, reward, next_observation = data[i]
    make_step(net, target_net, optimizer, criterion, observation, action, reward, next_observation)

GAMMA *= GAMMA
optimizer.step()
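
Updating the target network only every 100 episodes keeps the bootstrap target fixed for longer, so it no longer chases the online network the way it did with e % 2. For reference, here is a minimal sketch of how the Double DQN target (the RHS referred to in make_step's comments) is usually formed: the greedy action is chosen with the online network on the next observation and then evaluated with the target network. The function name double_dqn_target and the done flag are illustrative and not part of the original code:

def double_dqn_target(model, target_model, reward, next_observation, done):
    with torch.no_grad():
        inp_next = torch.Tensor(next_observation)
        # pick the greedy action with the online network on the NEXT state
        best_next_action = torch.argmax(model(inp_next))
        # evaluate that action with the periodically frozen target network
        q_next = target_model(inp_next)[best_next_action]
    # bootstrap only for non-terminal transitions
    return torch.Tensor([reward]) + GAMMA * q_next * (0.0 if done else 1.0)

In the snippets above this quantity plays the role of target_reward in the MSE loss.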