So I'm trying to create a really simple neural network, with no hidden layers: just the input (3 elements) and a linear output (2 elements).
I then define some variables to hold the configuration and the weights:
import numpy as np
import tensorflow as tf

# some configs
input_size = 3
action_size = 2
min_delta, max_delta = -1, 1
learning_rate_op = 0.5
w = {}  # weights
I then create the training network:
# training network
with tf.variable_scope('prediction'):
    state_tensor = tf.placeholder('float32', [None, input_size], name='state_tensor')
    w['q_w'] = tf.get_variable('Matrix', [state_tensor.get_shape().as_list()[1], action_size],
                               tf.float32, tf.random_normal_initializer(stddev=0.02))
    w['q_b'] = tf.get_variable('bias', [action_size], initializer=tf.constant_initializer(0))
    q = tf.nn.bias_add(tf.matmul(state_tensor, w['q_w']), w['q_b'])
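In other words, the prediction is just a single affine layer. As a rough NumPy sketch of what q computes (the W_value and b_value names here are mine, standing in for the evaluated weights; this is for illustration only, not part of the graph):

import numpy as np

# stand-in values for the learned parameters (hypothetical, for illustration only)
W_value = np.zeros((3, 2))   # would come from sess.run(w['q_w'])
b_value = np.zeros(2)        # would come from sess.run(w['q_b'])
s_t = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 0, 0]], dtype=np.float32)

# same computation as q = state_tensor @ q_w + q_b in the graph above
q_manual = s_t.dot(W_value) + b_value   # shape [4, 2]: one Q-value per action, per state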
I define the optimizer to minimize the square of the difference between the target value and the network's prediction:
# weight optimizer
with tf.variable_scope('optimizer'):
    # tensor to hold the target values
    # e.g. target_q_tensor = [10, 11]
    target_q_tensor = tf.placeholder('float32', [None], name='target_q_tensor')

    # tensors for the actions, the one-hot action matrix, and the Q-values of the taken actions
    # e.g. action_tensor = [0, 1], action_one_hot = [[1, 0], [0, 1]], q_acted = [Q_0, Q_1]
    action_tensor = tf.placeholder('int64', [None], name='action_tensor')
    action_one_hot = tf.one_hot(action_tensor, action_size, 1.0, 0.0, name='action_one_hot')
    q_acted = tf.reduce_sum(q * action_one_hot, reduction_indices=1, name='q_acted')

    # delta
    delta = target_q_tensor - q_acted
    clipped_delta = tf.clip_by_value(delta, min_delta, max_delta, name='clipped_delta')

    # error function
    loss = tf.reduce_mean(tf.square(clipped_delta), name='loss')

    # optimizer
    # optim = tf.train.AdamOptimizer(learning_rate_op).minimize(loss)
    optim = tf.train.GradientDescentOptimizer(learning_rate_op).minimize(loss)
Finally, I run some values in an endless loop. However, the weights never update; they keep the random values they were initialized with:
with tf.Session() as sess:
    tf.initialize_all_variables().run()

    s_t = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 0, 0]])
    action = np.array([0, 1, 0, 1])
    target_q = np.array([10, -11, -12, 13])

    counter = 0  # added here: 'counter' was used below but never defined
    while True:
        if counter % 10000 == 0:
            q_values = q.eval({state_tensor: s_t})
            for i in range(len(s_t)):
                print("q", q_values[i])
            print("w", sess.run(w['q_w']), '\nb', sess.run(w['q_b']))
        sess.run(optim, {target_q_tensor: target_q, action_tensor: action, state_tensor: s_t})
        counter += 1
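To make it easier to see what is happening at each step, the loss can be fetched in the same run call (this is my addition for debugging, not part of the original loop):

# debugging aid (not in the original code): fetch the loss together with the training op
loss_val, _ = sess.run([loss, optim],
                       {target_q_tensor: target_q, action_tensor: action, state_tensor: s_t})
print("loss", loss_val)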
I took the code from a DQN implementation that works, so I figure I'm doing something blatantly wrong. The network should converge to the target Q-values above, but the weights don't change at all. Any pointers?
Update: it turns out that clipping the loss was causing the issue. However, I don't understand why...
Answer (score 1):
If your loss is always 1, then your clipped delta is always saturating at the clip boundary. Once the delta is clipped to a constant, its gradient with respect to the weights is zero, so gradient descent has nothing to propagate back. Clipping the loss seems an odd choice to me in any case; maybe you meant to clip the gradient of the loss? See also this.
In simple cases, removing the clipping entirely will (probably) also work.
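For reference, here is a minimal sketch of clipping the gradients rather than the loss, via compute_gradients/apply_gradients. It reuses the names from the question (delta, min_delta, max_delta, learning_rate_op), assumes the unclipped delta, and is only a sketch rather than a drop-in fix:

# clip the gradients, not the loss (sketch)
loss_unclipped = tf.reduce_mean(tf.square(delta), name='loss_unclipped')
optimizer = tf.train.GradientDescentOptimizer(learning_rate_op)
grads_and_vars = optimizer.compute_gradients(loss_unclipped)
clipped = [(tf.clip_by_value(g, min_delta, max_delta), v)
           for g, v in grads_and_vars if g is not None]
optim = optimizer.apply_gradients(clipped)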