I am trying to run a vanilla policy gradient algorithm and render the OpenAI Gym environment "CartPole-v1". The code for the algorithm is given below, and it runs fine without any errors. The Jupyter notebook used for this code can be found here.
%pylab inline
import tensorflow as tf
import tensorflow.keras.backend as K
import numpy as np
import gym
from tqdm import trange
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.layers import *
env = gym.make("CartPole-v1")
env.observation_space, env.action_space  # for CartPole-v1: (Box(4,), Discrete(2))
x = in1 = Input(env.observation_space.shape)
x = Dense(32)(x)
x = Activation('tanh')(x)
x = Dense(env.action_space.n)(x)
x = Lambda(lambda x: tf.nn.log_softmax(x, axis=-1))(x)
m = Model(in1, x)
def loss(y_true, y_pred):
    # y_pred is the log probs of the actions
    # y_true is the action mask weighted by sum of rewards
    return -tf.reduce_sum(y_true*y_pred, axis=-1)
m.compile(Adam(1e-2), loss)
m.summary()
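To make the custom loss concrete, here is a small numpy sanity check (my own illustration, not part of the original notebook; all numbers are hypothetical): with a one-hot action mask scaled by the reward-to-go, the per-step loss reduces to the REINFORCE term -G_t * log pi(a_t|s_t).

# Hypothetical numbers for illustration only.
G = 3.0                                   # reward-to-go for the taken action
log_probs = np.log([0.6, 0.4])            # policy log-probs for the 2 actions
y_true = np.array([G, 0.0])               # mask: action 0 was taken
step_loss = -np.sum(y_true * log_probs)   # == -3.0 * log(0.6) ~= 1.53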
lll = []
# this is like 5x faster than calling m.predict and picking in numpy
pf = K.function(m.layers[0].input, tf.random.categorical(m.layers[-1].output, 1)[0])
tt = trange(40)
for epoch in tt:
    X, Y = [], []
    ll = []
    while len(X) < 8192:
        obs = env.reset()
        acts, rews = [], []
        while True:
            # pick action
            #act_dist = np.exp(m.predict_on_batch(obs[None])[0])
            #act = np.random.choice(range(env.action_space.n), p=act_dist)
            # pick action (fast!)
            act = pf(obs[None])[0]
            # save this state action pair
            X.append(np.copy(obs))
            acts.append(act)
            # take the action
            obs, rew, done, _ = env.step(act)
            rews.append(rew)
            if done:
                for i, act in enumerate(acts):
                    act_mask = np.zeros((env.action_space.n))
                    act_mask[act] = np.sum(rews[i:])
                    Y.append(act_mask)
                ll.append(np.sum(rews))
                break
    loss = m.train_on_batch(np.array(X), np.array(Y))
    lll.append((np.mean(ll), loss))
    tt.set_description("ep_rew:%7.2f loss:%7.2f" % lll[-1])
    tt.refresh()
plot([x[0] for x in lll], label="Mean Episode Reward")
plot([x[1] for x in lll], label="Epoch Loss")
plt.legend()
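As an aside, here is a quick shape check (my own sketch, not from the notebook) of what pf returns: tf.random.categorical(logits, 1) has shape (batch, 1), and the [0] inside the K.function takes the first batch row, so pf(obs[None]) is an array of shape (1,) whose single element is the sampled action.

# My own sketch, assuming a fresh observation from env.reset():
sample = pf(env.reset()[None])   # e.g. array([0]), shape (1,)
act = sample[0]                  # a scalar np.int64 -- what env.step expects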
When I try to render the environment, I get an IndexError:
import time

obs = env.reset()
rews = []
while True:
    env.render()
    pred, act = [x[0] for x in pf(obs[None])]
    obs, rew, done, _ = env.step(np.argmax(pred))
    rews.append(rew)
    time.sleep(0.05)
    if done:
        break
print("ran %d steps, got %f reward" % (len(rews), np.sum(rews)))
IndexError                                Traceback (most recent call last)
      3 while True:
      4     env.render()
----> 5     pred, act = [x[0] for x in pf(obs[None])]
      6     obs, rew, done, _ = env.step(np.argmax(pred))
      7     rews.append(rew)

IndexError: invalid index to scalar variable.
I have read that this happens when you try to index a numpy scalar such as numpy.int64 or numpy.float64, but I am not sure where this error is coming from or how I should go about fixing it. Any help or advice would be greatly appreciated.
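For example, this minimal snippet (my own check, separate from the notebook) reproduces the same message:

import numpy as np
x = np.int64(7)
x[0]   # IndexError: invalid index to scalar variable.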
Answer 0 (score: 2)
It looks like you may have changed how pf works but forgot to update the rendering code. Try this (I haven't tested it):
act, = pf(obs[None]) # same as pf(obs[None])[0] but asserts shape
obs, rew, done, _ = env.step(act)
This will pick actions randomly, just as during training; if you want greedy actions, you will need to change a few other things.
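If you do want greedy evaluation, one way would be to query the model's log-probs directly and take the argmax (an untested sketch on my part, using predict_on_batch rather than pf):

# Untested sketch: query the log-probs and take the argmax action.
logp = m.predict_on_batch(obs[None])[0]   # log-probs over the actions
act = int(np.argmax(logp))                # greedy action
obs, rew, done, _ = env.step(act)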