I have a matrix like this:
mat1 = tf.Variable([[0., 0., 0., 0.],
                    [0.7, 0., 0., 0.],
                    [0., 0., 0., 0.],
                    [0., 0., 0., 0.],
                    [0., 0., 0., 0.],
                    [0., 0., 0., 0.],
                    [0., 0., 0., 0.]])
mat1 = mat1 + abs(mat1)/2
And an index matrix like this:
prob_indice = tf.constant([[0, 1],
                           [0, 3],
                           [1, 1],
                           [1, 2],
                           [1, 3],
                           [5, 0],
                           [5, 1],
                           [5, 2],
                           [5, 3],
                           [6, 1],
                           [6, 3]])
energy_allocation = 0.05
Now, I want to add energy_allocation to the elements of mat1 whose indices are listed in prob_indice.
So the expected output would be:
[[0.   0.05 0.   0.05]
 [0.7  0.05 0.05 0.05]
 [0.   0.   0.   0.  ]
 [0.   0.   0.   0.  ]
 [0.   0.   0.   0.  ]
 [0.05 0.05 0.05 0.05]
 [0.   0.05 0.   0.05]]
Update 1

mat1 is computed as mat1 = x + abs(x)/2, which is why using tf.scatter_nd_add produces this error:

return ref._lazy_read(gen_state_ops.resource_scatter_nd_add(  # pylint: disable=protected-access
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute '_lazy_read'
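For reference, a minimal sketch that seems to reproduce the situation (assuming TF 2.x with eager execution; the small x below is just a hypothetical example, not my real data):

import tensorflow as tf  # assuming TF 2.x, eager execution on by default

x = tf.Variable([[0.7, -0.3], [0.0, 0.1]])  # hypothetical values
mat1 = x + abs(x) / 2                       # arithmetic returns a tf.Tensor, not a tf.Variable
print(type(mat1))  # <class 'tensorflow.python.framework.ops.EagerTensor'>

# The ref-based op expects a mutable Variable, so passing this EagerTensor
# appears to be what triggers the '_lazy_read' AttributeError above:
# tf.compat.v1.scatter_nd_add(mat1, [[0, 1]], [0.05])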
Thanks!
Answer 0 (score: 1)
You need tf.scatter_nd_add():
import tensorflow as tf

mat1 = tf.Variable([[0., 0., 0., 0.],
                    [0.7, 0., 0., 0.],
                    [0., 0., 0., 0.],
                    [0., 0., 0., 0.],
                    [0., 0., 0., 0.],
                    [0., 0., 0., 0.],
                    [0., 0., 0., 0.]])
prob_indice = tf.constant([[0, 1],
                           [0, 3],
                           [1, 1],
                           [1, 2],
                           [1, 3],
                           [5, 0],
                           [5, 1],
                           [5, 2],
                           [5, 3],
                           [6, 1],
                           [6, 3]])
energy_allocation = 0.05

result = tf.scatter_nd_add(mat1,
                           prob_indice,
                           energy_allocation * tf.ones(shape=(prob_indice.shape[0],)))
# If your mat1 is a tf.Tensor, you can use tf.scatter_nd to achieve this:
# result = tf.scatter_nd(prob_indice,
#                        energy_allocation * tf.ones(shape=(prob_indice.shape[0],)),
#                        mat1.shape) + mat1
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(result))

# [[0.   0.05 0.   0.05]
#  [0.7  0.05 0.05 0.05]
#  [0.   0.   0.   0.  ]
#  [0.   0.   0.   0.  ]
#  [0.   0.   0.   0.  ]
#  [0.05 0.05 0.05 0.05]
#  [0.   0.05 0.   0.05]]
Update: If your tensorflow version=2, you can use tf.tensor_scatter_nd_add() instead of tf.scatter_nd_add().
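For example, a minimal TF 2.x sketch of that path, assuming mat1 is already the plain eager tensor produced by x + abs(x)/2 (here written directly as a constant with the question's values):

import tensorflow as tf  # TF 2.x, eager execution

mat1 = tf.constant([[0., 0., 0., 0.],
                    [0.7, 0., 0., 0.],
                    [0., 0., 0., 0.],
                    [0., 0., 0., 0.],
                    [0., 0., 0., 0.],
                    [0., 0., 0., 0.],
                    [0., 0., 0., 0.]])
prob_indice = tf.constant([[0, 1], [0, 3],
                           [1, 1], [1, 2], [1, 3],
                           [5, 0], [5, 1], [5, 2], [5, 3],
                           [6, 1], [6, 3]])
energy_allocation = 0.05

# tf.tensor_scatter_nd_add takes a plain tensor (no Variable needed) and
# adds one update value per index pair, returning a new tensor.
updates = energy_allocation * tf.ones(shape=(tf.shape(prob_indice)[0],))
result = tf.tensor_scatter_nd_add(mat1, prob_indice, updates)
print(result.numpy())

No session is needed here: in eager mode, result already holds the values shown in the expected output above.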