How to minimize absolute difference loss in TensorFlow?

Asked: 2017-07-26 07:23:40

Tags: tensorflow deep-learning

I have been trying to reproduce FlowNet 1.0 in TensorFlow for a few days. Rebuilding the network is not hard, but the Absolute Difference loss shown on TensorBoard appears to be stuck in a cycle.

[TensorBoard screenshot: the Absolute Difference loss oscillating in a repeating pattern]

The main code you may want to see is shown below.

# images1_shape = images2_shape = [batch, 461, 589, 6]
def inference(images1, images2, flow_ground_truth):
    with tf.device('/gpu:0'):
        # scale pixel values into [0, 1] (0.00392156862745 ≈ 1/255)
        images1 = images1 * 0.00392156862745
        images2 = images2 * 0.00392156862745
        inputs = tf.concat([images1, images2], axis=3)
        conv1 = tf.contrib.layers.conv2d(inputs, 64, 5, stride=[2, 2])
        conv2 = tf.contrib.layers.conv2d(conv1, 128, 5, stride=[2, 2])

        # ... (remaining layers omitted; they produce predict_flow and final_flow)

        flowloss = tf.losses.absolute_difference(flow_ground_truth, predict_flow)
        final_flow = 20 * final_flow  # FlowNet scales the predicted flow by a factor of 20

        return final_flow, flowloss

lr = tf.train.exponential_decay(0.0001,
                                global_step,
                                10000,
                                0.99,
                                staircase=True)
opt = tf.train.AdamOptimizer(learning_rate=lr)
train_op = opt.minimize(loss, global_step=global_step)
sess = tf.Session(config=tf.ConfigProto(
        allow_soft_placement=True,
        log_device_placement=False))
sumwriter = tf.summary.FileWriter('/tmp/flow', graph=sess.graph)

threads = tf.train.start_queue_runners(sess)
sess.run(tf.global_variables_initializer())

for step in range(100000):
    gdt = flowIO.next_GroudTruthFlow_batch(gd_truth_name_list, 4)
    _, total_loss, summary = sess.run([train_op, loss, merged], feed_dict={gd_truth: gdt})
    print('---------', 'step %d' % step)
    print(' loss is %f ' % total_loss)
    if step % 200 == 0:
        sumwriter.add_summary(summary, step)
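For reference, `tf.losses.absolute_difference` with its default unit weights reduces to the mean absolute error over all tensor elements. A pure-NumPy check of that quantity (this only mirrors the TF op; the function name here is my own):

```python
import numpy as np

def mean_absolute_difference(ground_truth, prediction):
    """Mean absolute error over all elements, mirroring what
    tf.losses.absolute_difference(labels, predictions) returns
    with the default weights of 1.0."""
    gt = np.asarray(ground_truth, dtype=np.float64)
    pred = np.asarray(prediction, dtype=np.float64)
    return np.mean(np.abs(gt - pred))

gt = [[1.0, 2.0], [3.0, 4.0]]
pred = [[1.5, 2.0], [2.0, 5.0]]
print(mean_absolute_difference(gt, pred))  # (0.5 + 0.0 + 1.0 + 1.0) / 4 = 0.625
```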

I have also tried other learning rates, such as 0.1 and 0.001, and even other optimizers. The cycle is still there; only its shape changes.
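The decay schedule in the snippet above can be checked numerically. With `staircase=True`, `tf.train.exponential_decay` computes `lr * decay_rate ** (step // decay_steps)`, so over the 100000 training steps the rate only decays a handful of times:

```python
# Effective learning rate of the staircase schedule used above:
# lr(step) = 0.0001 * 0.99 ** (step // 10000)
def decayed_lr(step, base_lr=0.0001, decay_steps=10000, decay_rate=0.99):
    """Staircase exponential decay, matching tf.train.exponential_decay
    with staircase=True for the parameters used in the question."""
    return base_lr * decay_rate ** (step // decay_steps)

for step in (0, 10000, 50000, 99999):
    print(step, decayed_lr(step))
```

So even at the last step the rate is still about 0.99^9 of the base rate, i.e. the schedule barely changes it.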

Since too much ugly code might spoil your mood, I have not posted everything. If more information would help, I will provide it.

Any suggestion is appreciated. Thank you very much!

0 Answers:

There are no answers.