What is wrong with Dyna-Q? (Dyna-Q vs. Q-learning)

Date: 2020-05-14 07:48:43

Tags: python reinforcement-learning q-learning

I implemented the Q-learning algorithm and ran it on FrozenLake-v0 from OpenAI Gym. Over 10,000 episodes I get a total reward of 185 during training and 7,333 during testing. Is that good?
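(For context: FrozenLake-v0 returns a reward of 1 only when the agent reaches the goal and 0 otherwise, so these totals are effectively counts of successful episodes, e.g. 7,333 successes out of 10,000 test episodes. The totals are accumulated roughly like the sketch below; the counter name is an assumption, since the tallying code is omitted here.)

# Hypothetical tallying loop (not the code from the question): sum every
# step reward over all episodes to obtain the "total reward" figures.
total_reward = 0
for episode in range(total_episodes):
    state = agent.env.reset()
    for t in range(max_steps):
        action = agent.choose_action(state)
        state, reward, done, info = agent.env.step(action)
        total_reward += reward
        if done:
            break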

I also tried the Dyna-Q algorithm, but it performs worse than Q-learning. Over about 10,000 episodes with 50 planning steps, I get a total reward of around 200 during training and 700-900 during testing.

Why does this happen?

The code is below. Is there something wrong with it?

# Setup
env = gym.make('FrozenLake-v0')

epsilon = 0.9
lr_rate = 0.1
gamma = 0.99
planning_steps = 0

total_episodes = 10000
max_steps = 100

Training and testing():

while t < max_steps:
    action = agent.choose_action(state)
    state2, reward, done, info = agent.env.step(action)
    # Removed in testing
    agent.learn(state, state2, reward, action)       # direct Q-learning update from the real step
    agent.model.add(state, action, state2, reward)   # update the learned model
    agent.planning(planning_steps)                   # extra updates from simulated experience
    # Till here
    state = state2

def add(self, state, action, state2, reward):
    # Record the last observed next state and reward for (state, action)
    self.transitions[state, action] = state2
    self.rewards[state, action] = reward

def sample(self, env):
    state, action = 0, 0
    # Random visited state
    if all(np.sum(self.transitions, axis=1)) <= 0:
        state = np.random.randint(env.observation_space.n)
    else:
        state = np.random.choice(np.where(np.sum(self.transitions, axis=1) > 0)[0])

    # Random action in that state
    if all(self.transitions[state]) <= 0:
        action = np.random.randint(env.action_space.n)
    else:    
        action = np.random.choice(np.where(self.transitions[state] > 0)[0])
    return state, action

def step(self, state, action):
    state2 = self.transitions[state, action]
    reward = self.rewards[state, action]
    return state2, reward

def choose_action(self, state):
    if np.random.uniform(0, 1) < epsilon:
        return self.env.action_space.sample()
    else:
        return np.argmax(self.Q[state, :])

def learn(self, state, state2, reward, action):
    # predict = Q[state, action]
    # Q[state, action] = Q[state, action] + lr_rate * (target - predict)
    target = reward + gamma * np.max(self.Q[state2, :])
    self.Q[state, action] = (1 - lr_rate) * self.Q[state, action] + lr_rate * target

def planning(self, n_steps):
    # if len(self.transitions)>planning_steps:
    for i in range(n_steps):
        state, action =  self.model.sample(self.env)
        state2, reward = self.model.step(state, action)
        self.learn(state, state2, reward, action)

2 Answers:

Answer 0 (score: 0):

My guess is that it may be because the environment is stochastic. Learning a model in a stochastic environment can lead to a suboptimal policy. In Sutton & Barto's RL book, they say they assume a deterministic environment.
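One way to test that hypothesis is to make the model itself stochastic: store every observed (next state, reward) outcome for each (state, action) pair and sample among them during planning, instead of keeping only the last outcome. This is just a sketch under that assumption, not code from the question or this answer; all names below are made up.

import random
from collections import defaultdict

class SampleModel:
    # Keeps every observed outcome for each (state, action) and samples one of
    # them during planning, so the model reflects the environment's stochasticity.
    def __init__(self):
        self.outcomes = defaultdict(list)   # (state, action) -> [(state2, reward), ...]

    def add(self, state, action, state2, reward):
        self.outcomes[(state, action)].append((state2, reward))

    def sample(self):
        # Only (state, action) pairs that have actually been visited
        # (assumes at least one real environment step has been recorded)
        return random.choice(list(self.outcomes.keys()))

    def step(self, state, action):
        # Draw one of the outcomes actually observed for this pair
        return random.choice(self.outcomes[(state, action)])

Alternatively, depending on the gym version, a deterministic variant of the environment can be created with gym.make('FrozenLake-v0', is_slippery=False), which matches the deterministic-model assumption.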

Answer 1 (score: 0):

After taking the model step, check that your planning step samples the next state.

If not, planning may be taking repeated steps from the same starting state given by state2.

That said, I may have misunderstood what the self.env parameter is doing.
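One reading of this suggestion is a rollout-style planning variant in which the simulated trajectory continues from state2 instead of restarting from the same state each iteration. The sketch below assumes that reading and reuses the method names from the question; it is not the standard tabular Dyna-Q loop, which re-samples a previously visited (state, action) pair on every planning step.

def planning(self, n_steps):
    # Simulated rollout: after each model step, continue from the state just reached
    state, action = self.model.sample(self.env)
    for _ in range(n_steps):
        state2, reward = self.model.step(state, action)
        self.learn(state, state2, reward, action)
        state = state2                        # advance to the sampled next state
        action = self.choose_action(state)    # pick the next simulated action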