How to map a function over variable-shaped weight tensors in TensorFlow

Asked: 2018-05-25 16:39:47

Tags: python tensorflow

I am trying to compute the loss for my neural network. The number of layers is passed in as a parameter, and I would like to compute a loss along the lines of:

  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf_train_labels, logits=logits) + 
      L2_beta * (tf.nn.l2_loss(weights_1) + tf.nn.l2_loss(weights_2))
  )

This version does not work when the layers are passed in as a parameter. I could use a for loop to sum up all the weight losses, but that is not elegant. I would like to map nn.l2_loss over every element of the list weights, but I cannot get it to work!

import tensorflow as tf

weights = []
weights.append(tf.Variable(tf.truncated_normal([784, 1024])))
weights.append(tf.Variable(tf.truncated_normal([1024, 512])))
weights.append(tf.Variable(tf.truncated_normal([512, 10])))

print(weights)

# this works
tf.nn.l2_loss(weights[0]) + tf.nn.l2_loss(weights[1]) + tf.nn.l2_loss(weights[2])

# this is what I need, but it fails: tf.map_fn converts its input to a
# single stacked tensor, so every element would need the same shape
tf.map_fn(tf.nn.l2_loss, weights)

Ideas?

1 answer:

Answer 0 (score: 1)

In the example below I just used the plain built-in map. I am not sure whether this is as good as tf.map_fn, but it works without an explicit loop.

import tensorflow as tf

weights = []
weights.append(tf.Variable(tf.truncated_normal([784, 1024])))
weights.append(tf.Variable(tf.truncated_normal([1024, 512])))
weights.append(tf.Variable(tf.truncated_normal([512, 10])))
init_op = tf.global_variables_initializer()

required=tf.nn.l2_loss(weights[0]) + tf.nn.l2_loss(weights[1]) + tf.nn.l2_loss(weights[2])    
required2=tf.reduce_sum(map(tf.nn.l2_loss,weights))

with tf.Session() as sess:
  sess.run(init_op)
  your_result=sess.run(required)
  my_result=sess.run(required2)

print('your res ::{}, My res ::{}'.format(your_result, my_result))

For Python 3 (where map returns an iterator), use instead:

required2=tf.reduce_sum(list(map(tf.nn.l2_loss,weights)))
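The equivalence the answer relies on can also be checked outside a TensorFlow session with a small NumPy sketch (my own illustration, not part of the original answer). Here `l2_loss` is a hypothetical helper that mirrors the documented definition of tf.nn.l2_loss, namely `sum(t ** 2) / 2`:

```python
import numpy as np

def l2_loss(x):
    # mirrors tf.nn.l2_loss: sum(t ** 2) / 2
    return np.sum(np.square(x)) / 2.0

rng = np.random.default_rng(0)
# same layer shapes as in the question
weights = [rng.standard_normal(s) for s in [(784, 1024), (1024, 512), (512, 10)]]

# hand-written sum, fixed to exactly three layers
explicit = l2_loss(weights[0]) + l2_loss(weights[1]) + l2_loss(weights[2])

# mapped sum, works for any number of layers
mapped = sum(map(l2_loss, weights))

assert np.isclose(explicit, mapped)
```

In TensorFlow itself, a common way to combine a variable number of regularization terms in one op is tf.add_n, e.g. `tf.add_n([tf.nn.l2_loss(w) for w in weights])`, which avoids the shape restriction of tf.map_fn entirely.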