Deconvolution layer initialization in an FCN - loss drops too fast

Time: 2017-07-23 08:12:22

Tags: caffe deconvolution bilinear-interpolation

I am training a small FCN (10M weights on 12K images; see e.g. Long et al., 2015). The architecture is as follows (it starts from the FCN8s fc7 layer):

fc7->relu1->dropout->conv2048->conv1024->conv512->deconv1->deconv2->deconv3->deconv4->deconv5->crop->softmax_with_loss

When I initialized all the deconv layers with Gaussian weights, I got somewhat (though not always) reasonable results. Then I decided to do it the proper way and initialize them with bilinear-interpolation weights, using the script provided by Shelhamer (see e.g. https://github.com/zeakey/DeepSkeleton/blob/master/examples/DeepSkeleton/solve.py).
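For reference, the bilinear filler such scripts compute can be sketched in a few lines of NumPy (the function name here is my own; the upstream FCN code spells it slightly differently):

```python
import numpy as np

def bilinear_kernel(size):
    """Weights of a 2-D bilinear-interpolation kernel of shape (size, size).

    This follows the standard FCN upsampling filler: the kernel is the
    outer product of a 1-D triangular (hat) filter with itself.
    """
    factor = (size + 1) // 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size]
    return ((1 - abs(og[0] - center) / factor) *
            (1 - abs(og[1] - center) / factor))

# For a deconv layer with kernel_size 8 the filter connecting channel i
# to channel i is set to bilinear_kernel(8); cross-channel filters stay 0.
```

Note that the kernel weights sum to stride squared, so a constant input stays constant after upsampling.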

The Deconvolution layers look like this (the first one):

layer {
  name: "upscore2"
  type: "Deconvolution"
  bottom: "upsample"
  top: "upscore2"
  param {
    lr_mult: 2
  }
  convolution_param {
    # num_output: number of channels, ours: cow + background
    num_output: 2
    kernel_size: 8
    stride: 2
    bias_term: false
  }
}
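As a geometry sanity check: Caffe's Deconvolution layer produces an output of size stride * (input - 1) + kernel_size - 2 * pad, so with kernel_size 8, stride 2, and no padding the map grows slightly more than 2x, which is what the trailing crop layer trims off. A small helper of my own for illustration:

```python
def deconv_out_size(in_size, kernel_size, stride, pad=0):
    # Caffe Deconvolution: out = stride * (in - 1) + kernel_size - 2 * pad
    return stride * (in_size - 1) + kernel_size - 2 * pad

# A 32x32 map through this layer (kernel 8, stride 2, pad 0) gives 70x70:
# 6 pixels larger than plain 2x upsampling, hence the crop layer.
```

The canonical FCN choice of kernel_size 4, stride 2, pad 1 would give an exact 2x instead.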

The output I get is very strange: the loss drops rapidly (within 1000 iterations) and then stays around 1, but the model is completely useless on the test set. Any suggestions? I reduced the learning rate, but nothing seems to help. My solver settings:

net: "mcn-train_finetune11_slow_bilinear.prototxt"
solver_mode: GPU
# REDUCE LEARNING RATE
base_lr: 1e-8
lr_policy: "fixed"
iter_size: 1
max_iter: 100000
# REDUCE MOMENTUM TO 0.5
momentum: 0.5
weight_decay: 0.016
test_interval: 1000
test_iter: 125
display: 1000
average_loss: 1000
type: "Nesterov"
snapshot: 1000
snapshot_prefix: "mcn_finetune11_slow_bilinear"
debug_info: false
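One thing worth noting about these settings (my own reading, not a verified diagnosis): with base_lr at 1e-8 and momentum 0.5, the steady-state update magnitude is about lr / (1 - momentum) = 2e-8 times the gradient, so the deconv weights can hardly move away from their initialization. A tiny sketch of the momentum recursion (illustrative only, not Caffe's exact Nesterov code):

```python
def momentum_steps(lr, momentum, grad, n):
    """Velocity v_t = momentum * v_{t-1} + lr * grad after n identical steps.

    For a constant gradient this converges to lr * grad / (1 - momentum),
    i.e. momentum 0.5 only doubles the effective step size.
    """
    v = 0.0
    for _ in range(n):
        v = momentum * v + lr * grad
    return v
```

With base_lr 1e-8 that steady-state step is around 2e-8 per unit of gradient, which may explain why further lowering the learning rate changes nothing.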

PS: a short excerpt of the training log:

I0723 08:38:56.772249 29191 solver.cpp:272] Solving MyCoolNetwork, MCN
I0723 08:38:56.772260 29191 solver.cpp:273] Learning Rate Policy: fixed
I0723 08:38:56.775032 29191 solver.cpp:330] Iteration 0, Testing net (#0)
I0723 08:39:02.331010 29191 blocking_queue.cpp:49] Waiting for data
I0723 08:39:18.075814 29191 solver.cpp:397]     Test net output #0: loss = 37.8394 (* 1 = 37.8394 loss)
I0723 08:39:18.799008 29191 solver.cpp:218] Iteration 0 (-2.90699e-35 iter/s, 22.0247s/1000 iters), loss = 42.4986
I0723 08:39:18.799057 29191 solver.cpp:237]     Train net output #0: loss = 42.4986 (* 1 = 42.4986 loss)
I0723 08:39:18.799067 29191 sgd_solver.cpp:105] Iteration 0, lr = 1e-08
I0723 08:46:12.581365 29201 data_layer.cpp:73] Restarting data prefetching from start.
I0723 08:46:12.773717 29200 data_layer.cpp:73] Restarting data prefetching from start.
I0723 08:51:14.609473 29191 solver.cpp:447] Snapshotting to binary proto file mcn_finetune11_slow_bilinear_iter_1000.caffemodel
I0723 08:51:15.245028 29191 sgd_solver.cpp:273] Snapshotting solver state to binary proto file mcn_finetune11_slow_bilinear_iter_1000.solverstate
I0723 08:51:15.298612 29191 solver.cpp:330] Iteration 1000, Testing net (#0)
I0723 08:51:20.888267 29203 data_layer.cpp:73] Restarting data prefetching from start.
I0723 08:51:21.194495 29202 data_layer.cpp:73] Restarting data prefetching from start.
I0723 08:51:36.276700 29191 solver.cpp:397]     Test net output #0: loss = 1.18519 (* 1 = 1.18519 loss)
I0723 08:51:36.886041 29191 solver.cpp:218] Iteration 1000 (1.35488 iter/s, 738.075s/1000 iters), loss = 3.89015
I0723 08:51:36.887783 29191 solver.cpp:237]     Train net output #0: loss = 1.82311 (* 1 = 1.82311 loss)
I0723 08:51:36.887807 29191 sgd_solver.cpp:105] Iteration 1000, lr = 1e-08
I0723 08:53:34.997433 29201 data_layer.cpp:73] Restarting data prefetching from start.
I0723 08:53:35.040670 29200 data_layer.cpp:73] Restarting data prefetching from start.
I0723 09:00:35.779531 29201 data_layer.cpp:73] Restarting data prefetching from start.
I0723 09:00:35.791441 29200 data_layer.cpp:73] Restarting data prefetching from start.
I0723 09:03:31.710410 29191 solver.cpp:447] Snapshotting to binary proto file mcn_finetune11_slow_bilinear_iter_2000.caffemodel
I0723 09:03:32.383363 29191 sgd_solver.cpp:273] Snapshotting solver state to binary proto file mcn_finetune11_slow_bilinear_iter_2000.solverstate
I0723 09:03:32.09 29203 data_layer.cpp:73] Restarting data prefetching from start.
I0723 09:03:44.351140 29202 data_layer.cpp:73] Restarting data prefetching from start.
I0723 09:03:52.166584 29191 solver.cpp:397]     Test net output #0: loss = 1.14507 (* 1 = 1.14507 loss)
I0723 09:03:52.777982 29191 solver.cpp:218] Iteration 2000 (1.35892 iter/s, 735.881s/1000 iters), loss = 2.60843
I0723 09:03:52.778029 29191 solver.cpp:237]     Train net output #0: loss = 3.07199 (* 1 = 3.07199 loss)
I0723 09:03:52.778038 29191 sgd_solver.cpp:105] Iteration 2000, lr = 1e-08
I0723 09:07:57.400295 29201 data_layer.cpp:73] Restarting data prefetching from start.
I0723 09:07:57.448870 29200 data_layer.cpp:73] Restarting data prefetching from start.
I0723 09:14:58.070508 29201 data_layer.cpp:73] Restarting data prefetching from start.
I0723 09:14:58.100841 29200 data_layer.cpp:73] Restarting data prefetching from start.
I0723 09:15:48.708067 29191 solver.cpp:447] Snapshotting to binary proto file mcn_finetune11_slow_bilinear_iter_3000.caffemodel
I0723 09:15:49.358572 29191 sgd_solver.cpp:273] Snapshotting solver state to binary proto file mcn_finetune11_slow_bilinear_iter_3000.solverstate
I0723 09:15:49.411862 29191 solver.cpp:330] Iteration 3000, Testing net (#0)
I0723 09:16:05.268878 29203 data_layer.cpp:73] Restarting data prefetching from start.
I0723 09:16:05.502995 29202 data_layer.cpp:73] Restarting data prefetching from start.
I0723 09:16:08.177001 29191 solver.cpp:397]     Test net output #0: loss = 1.115 (* 1 = 1.115 loss)
I0723 09:16:08.767503 29191 solver.cpp:218] Iteration 3000 (1.35874 iter/s, 735.979s/1000 iters), loss = 2.57038
I0723 09:16:08.768218 29191 solver.cpp:237]     Train net output #0: loss = 2.33784 (* 1 = 2.33784 loss)
I0723 09:16:08.768534 29191 sgd_solver.cpp:105] Iteration 3000, lr = 1e-08
I0723 09:22:16.315538 29201 data_layer.cpp:73] Restarting data prefetching from start.
I0723 09:22:16.349555 29200 data_layer.cpp:73] Restarting data prefetching from start.

0 answers