How do I update the tensors (weight values) when trying to use two separate networks?

Asked: 2019-02-09 06:23:56

Tags: tensorflow deep-learning reinforcement-learning

I have been trying to build an AI for blackjack using RL. Right now I am trying to use two separate networks, which is one way of doing DQN. I searched online, found an approach, and tried to use it, but it failed.

This error occurs:

TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops (e.g. tf.cond) to execute subgraphs conditioned on the value of a tensor.

Code:

import gym
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

def one_hot(x):
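    # encode the (player sum, dealer showing card, usable ace) state tuple as a 1x600 one-hot row vector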
    s=np.identity(600)
    b = s[x[0] * 20 + x[1] * 2 + x[2]]
    return b.reshape(1, 600)

def boolstr_to_floatstr(v):
    # convert the "usable ace" boolean from the observation into 0/1
    if v == True:
        return 1
    elif v == False:
        return 0

env=gym.make('Blackjack-v0')
learning_rate=0.5


state_number=600
action_number=2
#######################################
X=tf.placeholder(tf.float32, shape=[1,state_number], name='input_data')
W1=tf.Variable(tf.random_uniform([state_number,128],0,0.01))#network for update
layer1=tf.nn.tanh(tf.matmul(X,W1))


W2=tf.Variable(tf.random_uniform([128,256],0,0.01))
layer2=tf.nn.tanh(tf.matmul(layer1,W2))

W3=tf.Variable(tf.random_uniform([256,action_number],0,0.01))
Qpred=tf.matmul(layer2,W3) # Qprediction
#####################################################################
X1=tf.placeholder(shape=[1,state_number],dtype=tf.float32)
W4=tf.Variable(tf.random_uniform([state_number,128],0,0.01))#network for target
layer3=tf.nn.tanh(tf.matmul(X1,W4))


W5=tf.Variable(tf.random_uniform([128,256],0,0.01))
layer4=tf.nn.tanh(tf.matmul(layer3,W5))

W6=tf.Variable(tf.random_uniform([256,action_number],0,0.01))
target=tf.matmul(layer4,W6) # target
#################################################################
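# ops that copy the prediction-network weights (W1, W2, W3) into the target network (W4, W5, W6)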

update1=W4.assign(W1)
update2=W5.assign(W2)
update3=W6.assign(W3)

Y=tf.placeholder(shape=[1,action_number],dtype=tf.float32)

loss=tf.reduce_sum(tf.square(Y-Qpred))#cost(W)=(Ws-y)^2
train=tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(loss)





num_episodes=1000 
dis=0.99 #discount factor
rList=[] #record the reward

init=tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for i in range(num_episodes): # loop over the training episodes
        s = env.reset()
        rALL = 0
        done = False
        e=1./((i/100)+1) # epsilon for the explore/exploit trade-off
        total_loss=[]
        while not done:  

            s = np.asarray(s)
            s[2] = boolstr_to_floatstr(s[2])
            #print(np.shape(one_hot(s)))
            #print(one_hot(s))
            Qs=sess.run(Qpred,feed_dict={X:one_hot(s).astype(np.float32)})


            if np.random.rand(1)<e:   # explore: take a random action
                a=env.action_space.sample()
            else:
                a=np.argmax(Qs) # exploit: take the action with the highest current Q-value



            s1,reward,done,_=env.step(a)  # take the chosen action in the environment
            s1=np.asarray(s1)
            s1[2]=boolstr_to_floatstr(s1[2])

            if done:
                Qs[0,a]=reward

            else:
                Qs1=sess.run(target,feed_dict={X1:one_hot(s1)})

                Qs[0,a]=reward+dis*np.max(Qs1) #optimal Q

            sess.run(train,feed_dict={X:one_hot(s),Y:Qs})
            if i%10==0: # every 10 episodes, copy the prediction-network weights into the target network
                sess.run(update1,update2,update3)

        if reward==1:
            rALL += reward
        else:
            rALL+=0
        s=s1

        rList.append(rALL)



print('success rate: '+ str(sum(rList)/num_episodes))
print("Final Q-table values")

At the end I need to print the success rate. Before switching to DQN it reached 38%. If there is something wrong with my code from the standpoint of the DQN algorithm, please let me know.

1 answer:

Answer 0: (score: 0)

If you want to share weights between different networks, you only need to create the layers with the same names inside a variable scope with reuse enabled; the weights will then be shared automatically between the networks.
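A minimal sketch of that idea in TF1-style code (assuming tf.variable_scope with reuse=tf.AUTO_REUSE and tf.layers.dense; the scope and layer names are illustrative, not taken from the question):

import tensorflow as tf

def q_network(x, scope="q_network"):
    # Because both calls use the same scope name with reuse=tf.AUTO_REUSE,
    # the second call reuses the variables created by the first call.
    with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
        h1 = tf.layers.dense(x, 128, activation=tf.nn.tanh, name="h1")
        h2 = tf.layers.dense(h1, 256, activation=tf.nn.tanh, name="h2")
        return tf.layers.dense(h2, 2, name="q")  # 2 actions: stick or hit

X = tf.placeholder(tf.float32, shape=[1, 600], name="input_data")
X1 = tf.placeholder(tf.float32, shape=[1, 600], name="next_input_data")

Qpred = q_network(X)    # prediction network
target = q_network(X1)  # reuses the same weights, so no assign ops are needed

Note that a standard DQN usually keeps the target network as a separate copy that is only synced every few episodes rather than sharing weights all the time. With the separate-variable setup in the question, the three assign ops would then have to be fetched together as a list, e.g. sess.run([update1, update2, update3]); passing them as separate positional arguments makes update2 be treated as feed_dict, which is likely what raises the TypeError quoted above.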