TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [None, -1, 3]. Consider casting elements to a supported type

Time: 2019-08-01 14:54:27

Tags: list tensorflow seq2seq

I am running into a data-type mismatch error in TensorFlow.

I tried doing:

prediction = tf.convert_to_tensor(prediction)
y = tf.convert_to_tensor(y)

before passing them to the loss function.

def train():
    print("Training")

    # tf Graph input
    x = tf.placeholder(dtype=tf.float32, shape=[None, config.input_window_size - 1, config.input_size], name="input_sequence")
    y = tf.placeholder(dtype=tf.float32, shape=[None, config.output_window_size, config.input_size], name="raw_labels")
    dec_in = tf.placeholder(dtype=tf.float32, shape=[None, config.output_window_size, config.input_size], name="decoder_input")

    labels = tf.transpose(y, [1, 0, 2])
    labels = tf.reshape(labels, [-1, config.input_size])
    labels = tf.split(labels, config.output_window_size, axis=0, name='labels')

    tf.set_random_seed(112858)


    # Define model
    prediction = models.seq2seq(x, dec_in, config, True)

    sess = tf.Session()

    loss = eval('loss_functions.lie_loss(prediction, labels, config)')

    # Add a summary for the loss
    train_loss = tf.summary.scalar('train loss', loss)
    valid_loss = tf.summary.scalar('valid loss', loss)

The loss function:

def lie_loss(prediction, y, config):
    # Compute the joint discrepancy following forward kinematics of lie parameters

    prediction = tf.concat(prediction, axis=0)
    y = tf.concat(y, axis=0)

    joint_pred = forward_kinematics(prediction, config)
    joint_label = forward_kinematics(y, config)
    loss = tf.reduce_mean(tf.square(tf.subtract(joint_pred, joint_label)))

    return loss

I get the error: TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [None, -1, 3]. Consider casting elements to a supported type.
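For reference, here is a minimal sketch that reproduces this message (assuming TensorFlow 1.x graph mode; the placeholder and shape values are hypothetical, not taken from the project above):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 6])

# get_shape() is the static shape, so the unknown batch dimension comes back as None
nframes = x.get_shape().as_list()[0]     # None

# tf.reshape tries to convert the Python list [None, -1, 3] into a shape tensor and raises:
# TypeError: Failed to convert object of type <class 'list'> to Tensor.
# Contents: [None, -1, 3]. Consider casting elements to a supported type.
y = tf.reshape(x, [nframes, -1, 3])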

When I convert prediction and y to tensors, I instead get the following error inside forward_kinematics:

joint_pred = forward_kinematics(prediction, config)
Prediction/src/loss_functions.py", line 101, in forward_kinematics
    for i in range(omega[0].shape[0]):

TypeError: __index__ returned non-int (type NoneType)
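This second message is what Python raises when range() is handed a dimension whose static value is unknown; a minimal sketch (again TensorFlow 1.x, with a hypothetical placeholder) is:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])

# x.shape[0] is Dimension(None); its __index__ returns None, which range() rejects with
# "TypeError: __index__ returned non-int (type NoneType)"
for i in range(x.shape[0]):
    pass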

The forward_kinematics function is as follows:

def forward_kinematics(lie_parameters, config):
    print(lie_parameters)
    nframes = lie_parameters.get_shape().as_list()[0]
    print("nframs")
    print(nframes)
    # nframes = lie_parameters.shape[0]
    lie_parameters = tf.reshape(lie_parameters, [nframes, -1, 3])

    R = []
    idx = config.idx
    chain_idx = config.chain_idx
    # config bone params are retrieved from read_data.py

    bone_params = config.bone_params
    for h in range(nframes):
        omega = []
        A = []
        chain = []
        for i in range(len(idx) - 1):
            chain.append(tf.concat([lie_parameters[h, idx[i]:idx[i + 1]], tf.zeros([1, 3])], axis=0))

        omega.append(tf.concat(chain, axis=0))

        ##### I have to check this omega
        print("Omega")
        print(type(omega))
        print(omega)
        print(omega[0])
        print(omega[0].shape)
        print(omega[0].shape[0])

        for i in range(omega[0].shape[0]):
            A.append([rotmat(omega[0][i])])
        R.append(tf.concat(A, axis=0))

    R = tf.stack(R)
    joints = []
    for h in range(nframes):
        jointlist = []
        for i in range(len(chain_idx)):
            for j in range(chain_idx[i].shape[0]):
                if j == 0:
                    if i < 3:
                        jointlist.append(tf.zeros([3, 1]))
                    else:
                        jointlist.append(joint_xyz[14])
                else:
                    k = j - 1
                    A = R[h, chain_idx[i][k]]
                    while k > 0:
                        k = k - 1
                        A = tf.matmul(R[h, chain_idx[i][k]], A)
                    jointlist.append(
                        tf.matmul(A, tf.reshape(bone_params[chain_idx[i][j]], [3, 1])) + joint_xyz[chain_idx[i][j - 1]])
                joint_xyz = tf.stack(jointlist)
        joints.append(tf.squeeze(joint_xyz))
    joints = tf.stack(joints)
    return joints

2 Answers:

Answer 0 (score: 0)

I was facing a similar issue. I transposed first and then reshaped, which may help you here. My task was to unroll a tensor in order to compute the content cost in neural style transfer:

a_C_unrolled = tf.reshape(tf.transpose(a_C, perm=[0, 3, 1, 2]), shape=[m, n_H * n_W, n_C])
a_G_unrolled = tf.reshape(tf.transpose(a_G, perm=[0, 3, 1, 2]), shape=[m, n_H * n_W, n_C])

Additional hints on "unrolling":

To unroll the tensor, we want the shape to change from (m, nH, nW, nC) to (m, nH×nW, nC).

tf.reshape(tensor, shape) takes a list of integers that represent the desired output shape.

For the shape parameter, a -1 tells the function to choose the correct dimension size so that the output tensor still contains all the values of the original tensor.

So tf.reshape(a_C, shape=[m, n_H * n_W, n_C]) gives the same result as tf.reshape(a_C, shape=[m, -1, n_C]).

If you prefer to re-order the dimensions, you can use tf.transpose(tensor, perm), where perm is a list of integers containing the original index of the dimensions.

For example, tf.transpose(a_C, perm=[0,3,1,2]) changes the dimensions from (m, nH, nW, nC) to (m, nC, nH, nW).

There is more than one way to unroll the tensors.
Notice that it's not necessary to use tf.transpose to 'unroll' the tensors in this case but this is a useful function to practice and understand for other situations that you'll encounter.
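As a minimal sketch of the points above (hypothetical concrete shapes, assuming TensorFlow 1.x):

import tensorflow as tf

m, n_H, n_W, n_C = 1, 4, 4, 3                                      # example sizes only
a_C = tf.random_normal([m, n_H, n_W, n_C])

# These two reshapes are equivalent: -1 tells reshape to infer n_H * n_W itself
unrolled_explicit = tf.reshape(a_C, shape=[m, n_H * n_W, n_C])     # (1, 16, 3)
unrolled_inferred = tf.reshape(a_C, shape=[m, -1, n_C])            # (1, 16, 3)

# Optional re-ordering before the reshape, as in the answer's snippet
reordered = tf.transpose(a_C, perm=[0, 3, 1, 2])                   # (1, 3, 4, 4)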

Answer 1 (score: 0)

I figured out where the problem is.

When you put these dimensions into a list object, the int32 values become Dimension(x) objects. You can try converting them with TensorFlow, but that did not work for me because you then need to turn the result into a Tensor again, so you can use the code below to do the conversion without TensorFlow.


Note:

You can also try using np.cast.

a_C_unrolled = tf.reshape(a_C, shape=tf.constant([int(m), int(n_H * n_W), int(n_C)]))
a_G_unrolled = tf.reshape(a_G, shape=tf.constant([int(m), int(n_H * n_W), int(n_C)]))
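For illustration, the same idea written out with plain int() casts and no intermediate tf.constant (this only works when every dimension is statically known; a_C is carried over from the snippets above):

# Indexing get_shape() yields Dimension objects in TF 1.x; int() turns them into
# plain Python ints that tf.reshape accepts without any further conversion
m   = int(a_C.get_shape()[0])
n_H = int(a_C.get_shape()[1])
n_W = int(a_C.get_shape()[2])
n_C = int(a_C.get_shape()[3])

a_C_unrolled = tf.reshape(a_C, shape=[m, n_H * n_W, n_C])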