How can I reuse this TensorFlow layer?

Time: 2018-07-24 10:08:26

Tags: python tensorflow

I'm writing a model in TensorFlow that I previously built in PyTorch. Since some mechanisms are very different, I'm stuck at a few points. In particular:

import tensorflow as tf

dense = tf.layers.dense
adam = tf.train.AdamOptimizer

nb_joints = 3
code_size = 8

joints_info = tf.placeholder(tf.float32, shape = [None,nb_joints], name = 'joints_state')
target_info = tf.placeholder(tf.float32, shape = [None,2], name = 'target_pos')

next_joint_info = tf.placeholder(tf.float32, shape = [None,nb_joints], name = 'next_joints_state')

with tf.variable_scope('Encoder'): 

    # encoder
    e1 = dense(joints_info, 32, activation = tf.nn.relu, name ='encoding_1')
    code = dense(e1, code_size, activation = tf.nn.relu, name ='code')

    # decoder: the reconstruction needs nb_joints units so it matches joints_info in the loss
    d1 = dense(code, code_size, activation = tf.nn.relu, name ='decoding_1')
    recon = dense(d1, nb_joints, activation = tf.nn.relu, name ='reconstructed')

    with tf.variable_scope('EncoderLoss'):

        # mean squared reconstruction error, reduced to a scalar for minimize()
        encoder_loss = tf.reduce_mean(tf.squared_difference(joints_info, recon))
        train_encoder = adam(3e-4).minimize(encoder_loss)

with tf.variable_scope('Task'): 

    t1 = dense(code, 32, activation = tf.nn.relu, name ='task_code')

    t1_targ = dense(target_info, 32, activation = tf.nn.relu, name ='task_target')

    task_joint = tf.concat([t1,t1_targ],1, name ='States_concatenation')

    t2 = dense(task_joint, 128, activation = tf.nn.relu, name = 'task_joint_transformation')

    task_prediction = dense(t2, code_size, activation = None, name = 'task_prediction')

    with tf.variable_scope('TaskLoss'):

        # This is where I'm stuck: I want to apply the encoder's 'code'
        # operation, with the same weights, to the next_joint_info
        # placeholder instead of joints_info.
        task_real = ...
        task_loss = tf.squared_difference(task_prediction, task_real)

Could someone point me in the right direction? I have no idea how to proceed here.
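
To make the question concrete, here is a rough sketch of the kind of weight sharing I think I need, reusing the names defined above. The encode helper and the reuse=True flag are just my guesses at how TensorFlow's variable-scope reuse is supposed to work, not code I know to be correct:

def encode(joints, reuse=None):
    # Instead of building the encoder inline, wrap its two layers in a function;
    # with reuse=True the same 'Encoder/encoding_1' and 'Encoder/code' weights
    # should be applied to a different input tensor.
    with tf.variable_scope('Encoder', reuse=reuse):
        e1 = dense(joints, 32, activation = tf.nn.relu, name ='encoding_1')
        return dense(e1, code_size, activation = tf.nn.relu, name ='code')

code = encode(joints_info)                       # first call creates the variables
task_real = encode(next_joint_info, reuse=True)  # second call reuses the same weights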

Thanks a lot!

0 Answers:

No answers