Displaying the loss in a TensorFlow DQN without leaving the tf.Session()

Date: 2019-03-25 01:33:41

Tags: python tensorflow q-learning cross-entropy

I have a DQN set up and working correctly, but I can't figure out how to display the loss without leaving the TensorFlow session.

My first thought was that it would involve creating a new function or class, but I'm not sure where in the code that should go, or what exactly the function or class should contain.

import tensorflow as tf  # TF 1.x API; num_stops (the size of the action space) is assumed to be defined earlier

observations = tf.placeholder(tf.float32, shape=[None, num_stops], name='observations')
actions = tf.placeholder(tf.int32, shape=[None], name='actions')
rewards = tf.placeholder(tf.float32, shape=[None], name='rewards')

# Model
Y = tf.layers.dense(observations, 200, activation=tf.nn.relu)
Ylogits = tf.layers.dense(Y, num_stops)

# sample an action from predicted probabilities
sample_op = tf.random.categorical(logits=Ylogits, num_samples=1)


# loss
cross_entropies = tf.losses.softmax_cross_entropy(onehot_labels=tf.one_hot(actions, num_stops), logits=Ylogits)

loss = tf.reduce_sum(rewards * cross_entropies)

# training operation
optimizer = tf.train.RMSPropOptimizer(learning_rate=0.001, decay=.99)
train_op = optimizer.minimize(loss)

I then run the network, and it runs fine.

with tf.Session() as sess:

    '''etc. The network is run'''

    sess.run(train_op, feed_dict={observations: observations_list,
                                  actions: actions_list,
                                  rewards: rewards_list})

I want to display the loss that train_op minimizes to the user.

1 Answer:

Answer 0 (score: 0)

Try:

loss_value, _ = sess.run([loss, train_op], feed_dict={observations: observations_list,
                                                      actions: actions_list,
                                                      rewards: rewards_list})
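
Note that the fetched value is bound to a new name (loss_value here) rather than reassigning loss: overwriting loss with the returned NumPy scalar would make any later sess.run([loss, ...]) call fail, because loss would no longer refer to the graph tensor. As a minimal sketch, inside the with tf.Session() as sess: block this might look like the loop below (the num_episodes variable and the data-collection step are assumptions, not part of the original code):

for episode in range(num_episodes):
    # ... collect observations_list, actions_list, rewards_list for this episode ...

    # Fetch the loss in the same call that applies the gradient update
    loss_value, _ = sess.run([loss, train_op],
                             feed_dict={observations: observations_list,
                                        actions: actions_list,
                                        rewards: rewards_list})

    # Display it to the user, e.g. once per episode
    print('episode %d, loss %.4f' % (episode, loss_value))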