TypeError: unsupported operand type(s) for +: 'float' and 'instancemethod'

Time: 2019-08-19 17:15:33

Tags: python typeerror

This part computes the transition probability for an action:

def _calculate_transition_prob(self, current, delta):
    new_position = np.array(current) + np.array(delta)
    new_position = self._limit_coordinates(new_position).astype(int)
    new_state = np.ravel_multi_index(tuple(new_position), self.shape)
    reward = self.reward
    is_done = self._cliff[tuple(new_position)] or (tuple(new_position) == (4, 11))
    return [(1.0, new_state, reward, is_done)]

In this part I want to use the reward function as a parameter:

def reward(reward, self):
    self.reward = -100.0 if self._cliff[tuple(new_position)] else -1.0
    return reward

This part is the Q-learning (RL) algorithm:

def q_learning(env, num_episodes, discount_factor=1.0, alpha=0.5, epsilon=0.1):

    Q = defaultdict(lambda: np.zeros(env.action_space.n))

    episode_lengths = np.zeros(num_episodes)
    episode_rewards = np.zeros(num_episodes)

    policy = epsilon_greedy_policy(Q, epsilon, env.action_space.n)

    for i_episode in range(num_episodes):
        state = env.reset()

        for t in itertools.count():
            action_probs = policy(state)
            action = np.random.choice(np.arange(len(action_probs)), p = action_probs)
            next_state, reward, done, _ = env.step(action)

            episode_rewards[i_episode] += reward
            episode_lengths[i_episode] = t
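
The loop body is cut off here in the post; for context, the standard tabular Q-learning update that such a loop would normally continue with (a sketch under that assumption, not code from the original question) looks roughly like this:

            # TD target uses the greedy action in the next state (off-policy update)
            best_next_action = np.argmax(Q[next_state])
            td_target = reward + discount_factor * Q[next_state][best_next_action]
            Q[state][action] += alpha * (td_target - Q[state][action])

            if done:
                break
            state = next_state

    return Q, episode_lengths, episode_rewards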

1 Answer:

Answer 0 (score: 1)

Look at what that statement is doing: you are trying to add the function object reward to the value on the left-hand side. What would it even mean to add a function object to a number? You need to write the code more clearly so that the local reward variable is not confused with the reward() function that is in scope.
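
(For illustration, a minimal sketch of the same failure; the name 'instancemethod' in the message suggests Python 2, where bound methods are reported under that name:)

class Env(object):
    def reward(self):
        return -1.0

env = Env()
total = 0.0
total += env.reward()  # fine: the call returns a float
total += env.reward    # TypeError: cannot add a float and a bound method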

I suspect what you actually need is the function's return value, which means you have to call it. Again, I recommend giving the variable and the function different names.
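
A sketch of what that could look like (the cliff check moved inline; names and placement are illustrative, not taken from the original answer):

def _calculate_transition_prob(self, current, delta):
    new_position = np.array(current) + np.array(delta)
    new_position = self._limit_coordinates(new_position).astype(int)
    new_state = np.ravel_multi_index(tuple(new_position), self.shape)
    # Compute the numeric reward here instead of storing the bound method
    reward = -100.0 if self._cliff[tuple(new_position)] else -1.0
    is_done = self._cliff[tuple(new_position)] or (tuple(new_position) == (4, 11))
    return [(1.0, new_state, reward, is_done)]

Or, if the reward really should live in its own method, give it a name that cannot be shadowed and call it with parentheses:

def _reward_for(self, new_position):
    # Separate helper so the local variable and the method cannot be confused
    return -100.0 if self._cliff[tuple(new_position)] else -1.0

# inside _calculate_transition_prob:
# reward = self._reward_for(new_position)  # note the parentheses: call it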