Markov Decision Process clarification

Time: 2012-12-07 18:23:35

Tags: python algorithm markov

I'm implementing Value Iteration for a homework assignment. It's going well, but I'm confused about one part, specifically the line indicated below:

# (taken from http://aima.cs.berkeley.edu/python/mdp.html)
def value_iteration(mdp, epsilon=0.001):
  "Solving an MDP by value iteration. [Fig. 17.4]"
  U1 = dict([(s, 0) for s in mdp.states])
  R, T, gamma = mdp.R, mdp.T, mdp.gamma
  while True:
      U = U1.copy()
      delta = 0
      for s in mdp.states:
          U1[s] = R(s) + gamma * max([sum([p * U[s1] for (p, s1) in T(s, a)])
                                    for a in mdp.actions(s)])
          delta = max(delta, abs(U1[s] - U[s]))  # *****THIS LINE*****
      if delta < epsilon * (1 - gamma) / gamma:
          return U

I understand the general idea of this line, but should I be comparing the updated utility against the old version, or against the last state I updated, or something else? My code currently seems to be working (mostly :P), but I'm confused because other versions of the algorithm, like this one, use k ← k + 1 and ∀s |V_k[s] − V_{k−1}[s]| < θ, which makes me think I'm doing something wrong.
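To be concrete about what I *think* the check does, here is a tiny self-contained toy (a made-up 2-state chain, not the assignment grid): delta compares each state's value from the new sweep against its value from the previous sweep, which is how I read the ∀s |V_k[s] − V_{k−1}[s]| < θ condition. Please correct me if this reading is wrong.

# Minimal sketch of how I read the delta check, on a made-up 2-state chain
# (this toy MDP is only for illustration, it is not the assignment grid).
states = ["A", "B"]
R = {"A": 0.0, "B": 1.0}
next_state = {"A": "B", "B": "B"}   # one deterministic action per state
gamma, epsilon = 0.9, 0.001

V = {s: 0.0 for s in states}            # V_{k-1}: values from the previous sweep
while True:
    V_new = {}                          # V_k: built only from the old values
    delta = 0.0
    for s in states:
        V_new[s] = R[s] + gamma * V[next_state[s]]
        delta = max(delta, abs(V_new[s] - V[s]))   # |V_k[s] - V_{k-1}[s]|
    V = V_new
    if delta < epsilon * (1 - gamma) / gamma:
        break

print(V)   # approaches {'A': 9.0, 'B': 10.0}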

Here is my code:

grid = [[0,0,0],[0,None,0],[0,0,0],[0,-1,1]]

gamma = .9
epsilon = 0.001 #difference threshold
count = 0
while(True):
    delta = 0
    # walk through the set of states
    i = 0
    while(i < 4):
        j = 0
        while(j < 3):
            # skip the obstacle (None) and the two terminal states
            if(grid[i][j] == None or grid[i][j] == 1 or grid[i][j] == -1):
                j = j + 1
                continue
            temp = grid[i][j]
            # bellman() is a helper of mine, defined elsewhere
            grid[i][j] = -0.04 + (gamma * bellman(grid, i, j))
            delta = max(delta, abs(grid[i][j] - temp))

            j = j + 1
        i = i + 1

    if (delta < (epsilon * (1 - gamma) / gamma)):
        break
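For context, bellman(grid, i, j) isn't shown above; it's the helper I call to get the best expected utility over the four actions. A rough, simplified sketch of the idea (assuming the book's standard transition model: 0.8 in the intended direction, 0.1 to each perpendicular side, and staying put when a move would leave the grid or hit the None cell; my actual helper may differ slightly) is:

# Rough sketch of a bellman() helper (simplified; assumes the standard
# AIMA transition model: 0.8 intended direction, 0.1 to each perpendicular
# side, and bumping into a wall or the None cell keeps the agent in place).
def bellman(grid, i, j):
    rows, cols = len(grid), len(grid[0])

    def utility(ni, nj):
        # Moves that leave the grid or hit the obstacle leave the agent where it is.
        if not (0 <= ni < rows and 0 <= nj < cols) or grid[ni][nj] is None:
            return grid[i][j]
        return grid[ni][nj]

    up, down = (i - 1, j), (i + 1, j)
    left, right = (i, j - 1), (i, j + 1)

    # Expected utility of each of the four actions; return the best one.
    expected = [
        0.8 * utility(*up)    + 0.1 * utility(*left) + 0.1 * utility(*right),
        0.8 * utility(*down)  + 0.1 * utility(*left) + 0.1 * utility(*right),
        0.8 * utility(*left)  + 0.1 * utility(*up)   + 0.1 * utility(*down),
        0.8 * utility(*right) + 0.1 * utility(*up)   + 0.1 * utility(*down),
    ]
    return max(expected)

The -0.04 reward and the gamma factor are applied outside the helper, as shown in the loop above.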

And I get this output:

0.5094143831769762  0.6495863449484525  0.795362242280654    1
0.39850269350488843 None                0.48644045209498593 -1
0.29643305379491625 0.25395638075084487 0.344787810489289    0.12994184490884678

0 Answers:

There are no answers yet.