TensorFlow custom gradient with different input and output sizes

Asked: 2018-08-29 17:04:11

Tags: python tensorflow gradient

I am trying to define a new op and its gradient for TensorFlow. I found the following link: https://gist.github.com/harpone/3453185b41d8d985356cbe5e57d67342

It works fine as long as the input size equals the output size. I want to pass 2 arguments of size (1,) each, get an op output of size N=(65536,), and get a gradient of that same size N=(65536,) for each input.

My inputs: x (1,), y (1,)

Outputs:

op N=(65536,)

grad of x: N=(65536,)

grad of y: N=(65536,)

Alternatively: one input of size (2,), and an output gradient of size N=(65536,2).

During training I will apply reduce_sum, so I will get 2 numbers (one per parameter) and gradient descent should proceed correctly.

But it does not work, and I get the following message:

Traceback (most recent call last):
  File "tfTmp.py", line 97, in <module>
    gr = tf.gradients(z, [x,y])
  File "/home/user/.local/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 532, in gradients
    gate_gradients, aggregation_method, stop_gradients)
  File "/home/user/.local/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 734, in _GradientsHelper
    (op.name, i, t_in.shape, in_grad.shape))
ValueError: Incompatible shapes between op input and calculated input gradient.  Forward operation: myOp.  Input index: 0. Original input shape: (1,).  Calculated input gradient shape: (65536,)
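The traceback states the constraint directly: for each input, tf.gradients expects the gradient returned by the registered gradient function to have that input's shape, because the contraction over the output dimension is treated as part of the chain rule. A pure-Python sketch of that contraction, with a hypothetical elementwise op and N shrunk to 4 for readability (all names are illustrative):

```python
# Sketch: cost = sum(op(x, y)), where op maps two scalar inputs
# to an N-vector. The per-element partials dOp/dx have shape (N,),
# but the gradient w.r.t. the scalar input x must be a scalar.
N = 4
x, y = 1.0, 1.0

# Hypothetical op and its per-element partial derivatives.
op = [x * i + y for i in range(N)]      # shape (N,)
dop_dx = [float(i) for i in range(N)]   # dOp_i/dx = i, shape (N,)
dop_dy = [1.0] * N                      # dOp_i/dy = 1, shape (N,)

# cost = sum(op), so the upstream gradient d cost / d op_i is 1 for all i.
upstream = [1.0] * N

# The chain rule contracts over the output dimension N, producing a
# gradient with the same shape as the input (here: a scalar).
grad_x = sum(u * g for u, g in zip(upstream, dop_dx))
grad_y = sum(u * g for u, g in zip(upstream, dop_dy))

print(grad_x, grad_y)  # 6.0 4.0
```

This is why the error fires: the gradient function returns shape (65536,) for an input of shape (1,), instead of the contracted value.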

My code:

import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops

def my_op(x, y):
    op_output = getOutput(x, y)  # size N=(65536,)
    return op_output

def callGrad(op, grad):
    x = op.inputs[0]
    y = op.inputs[1]
    Gx = calculateGradx(x, y)  # size N=(65536,)
    Gy = calculateGrady(x, y)  # size N=(65536,)

    return Gx, Gy

def py_func(func, inp, Tout, stateful=True, name=None, grad=None):
    rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 1E+8))

    tf.RegisterGradient(rnd_name)(grad)
    g = tf.get_default_graph()
    with g.gradient_override_map({"PyFunc": rnd_name}):
        return tf.py_func(func, inp, Tout, stateful=stateful, name=name)

def tf_op(x, y, name=None):
    with ops.name_scope(name, "myOp", [x, y]) as name:
        z = py_func(my_op, [x, y], [tf.float32], name=name, grad=callGrad)
        return z[0]

with tf.Session() as sess:
    N = 1  # for N=65536 it works, but I want only one parameter
    x = tf.constant(np.ones(N))
    y = tf.constant(np.ones(N))

    z = tf_op(x, y)
    gr = tf.gradients(z, [x, y])

    init = tf.global_variables_initializer()
    sess.run(init)

    print(x.eval(), y.eval(), z.eval(), gr[0].eval(), gr[1].eval())

Important note: I do not want to apply reduce_sum inside the gradient, because I will use automatic differentiation, so I want the following cost function: sum((ref - OP(x,y))^2)

with gradients: -2*sum((ref - OP(x,y)) * dOp/dx), -2*sum((ref - OP(x,y)) * dOp/dy)

So dOp/dx should have the same size as Op, i.e. N=(65536,).
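These analytic formulas can be sanity-checked against a central finite difference using a throwaway stand-in for OP (all names hypothetical, N shrunk to 4; this is only a numerical check of the formula, not the asker's real op):

```python
# Verify d/dx sum((ref - op(x, y))^2) = -2 * sum((ref - op) * dOp/dx)
# against a finite difference, with a tiny stand-in op.
N = 4
ref = [2.0] * N

def op(x, y):
    # Hypothetical elementwise op: per-element dOp/dx = i, dOp/dy = 1.
    return [x * i + y for i in range(N)]

def cost(x, y):
    return sum((r - o) ** 2 for r, o in zip(ref, op(x, y)))

x, y = 1.0, 1.0

# Analytic gradient w.r.t. x: -2 * sum((ref - op) * dOp/dx).
analytic = -2.0 * sum((r - o) * i
                      for i, (r, o) in enumerate(zip(ref, op(x, y))))

# Central finite difference of the cost w.r.t. x.
eps = 1e-6
numeric = (cost(x + eps, y) - cost(x - eps, y)) / (2 * eps)

print(analytic, numeric)  # both approx 16.0
```

Note that the sum over the 65536 output elements happens in the cost gradient even though the per-element partials dOp/dx have shape (65536,), which is exactly the contraction TensorFlow expects the registered gradient function to perform.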

0 Answers:

No answers yet.