Loss increases while fine-tuning in Caffe

Time: 2016-05-15 20:02:49

Tags: deep-learning caffe loss

I have a classification problem with 280 classes and about 278,000 images. I am fine-tuning the GoogLeNet model (bvlc_googlenet in Caffe) using quick_solver.txt. My solver is as follows:

test_iter: 1000
test_interval: 4000
test_initialization: false
display: 40
average_loss: 40
base_lr: 0.001
lr_policy: "poly"
power: 0.5
max_iter: 800000
momentum: 0.9
weight_decay: 0.0002
snapshot: 20000

During training I use a batch size of 32, and the test batch size is also 32. I relearn only three layers from scratch by renaming them: loss1/classifier, loss2/classifier, and loss3/classifier. I set the global learning rate to 0.001, i.e. 10 times lower than the rate used when training from scratch, but the three renamed layers still get a learning rate of 0.01.
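
To make the renaming concrete, the renamed loss3 branch looks roughly like this, following the layout of the stock bvlc_googlenet train_val.prototxt (the fillers and the bias lr_mult here are illustrative defaults; the relevant parts are the new layer name, num_output: 280 and lr_mult: 10):

layer {
  name: "loss3/classifier_ft"           # renamed, so Caffe re-initializes it instead of copying weights
  type: "InnerProduct"
  bottom: "pool5/7x7_s1"
  top: "loss3/classifier"               # top keeps the old blob name so the loss/accuracy layers need no change
  param { lr_mult: 10 decay_mult: 1 }   # weights: 10 x base_lr = 0.01
  param { lr_mult: 20 decay_mult: 0 }   # bias: the usual 2x ratio (illustrative)
  inner_product_param {
    num_output: 280                     # 280 classes instead of the original 1000
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" value: 0 }
  }
}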

Log file of the first iterations:

I0515 08:44:41.838122  1279 solver.cpp:228] Iteration 40, loss = 9.72169
I0515 08:44:41.838163  1279 solver.cpp:244]     Train net output #0: loss1/loss1 = 5.7261 (* 0.3 = 1.71783 loss)
I0515 08:44:41.838170  1279 solver.cpp:244]     Train net output #1: loss2/loss1 = 5.65961 (* 0.3 = 1.69788 loss)
I0515 08:44:41.838173  1279 solver.cpp:244]     Train net output #2: loss3/loss3 = 5.46685 (* 1 = 5.46685 loss)
I0515 08:44:41.838179  1279 sgd_solver.cpp:106] Iteration 40, lr = 0.000999975
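
These lr values simply follow Caffe's "poly" policy, which with the solver settings above works out to

$$\mathrm{lr}(i) = \mathrm{base\_lr}\cdot\left(1 - \frac{i}{\mathrm{max\_iter}}\right)^{\mathrm{power}} = 0.001\cdot\left(1 - \frac{i}{800000}\right)^{0.5},$$

e.g. $\mathrm{lr}(40) = 0.001\cdot(1 - 40/800000)^{0.5} \approx 0.000999975$ and $\mathrm{lr}(119040) = 0.001\cdot(1 - 119040/800000)^{0.5} \approx 0.000922605$, matching the values printed in the logs. So the schedule decays smoothly over the 800,000 iterations and has no jump of its own around the point where the problem shows up below.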

Up to iteration 100,000, my net reaches ~50% top-1 accuracy and ~80% top-5 accuracy:

I0515 13:45:59.789113  1279 solver.cpp:337] Iteration 100000, Testing net (#0)
I0515 13:46:53.914217  1279 solver.cpp:404]     Test net output #0: loss1/loss1 = 2.08631 (* 0.3 = 0.625893 loss)
I0515 13:46:53.914274  1279 solver.cpp:404]     Test net output #1: loss1/top-1 = 0.458375
I0515 13:46:53.914279  1279 solver.cpp:404]     Test net output #2: loss1/top-5 = 0.768781
I0515 13:46:53.914284  1279 solver.cpp:404]     Test net output #3: loss2/loss1 = 1.88489 (* 0.3 = 0.565468 loss)
I0515 13:46:53.914288  1279 solver.cpp:404]     Test net output #4: loss2/top-1 = 0.494906
I0515 13:46:53.914290  1279 solver.cpp:404]     Test net output #5: loss2/top-5 = 0.805906
I0515 13:46:53.914294  1279 solver.cpp:404]     Test net output #6: loss3/loss3 = 1.77118 (* 1 = 1.77118 loss)
I0515 13:46:53.914297  1279 solver.cpp:404]     Test net output #7: loss3/top-1 = 0.517719
I0515 13:46:53.914299  1279 solver.cpp:404]     Test net output #8: loss3/top-5 = 0.827125

At iteration 119,000, everything is still fine:

I0515 14:43:38.669674  1279 solver.cpp:228] Iteration 119000, loss = 2.70265
I0515 14:43:38.669777  1279 solver.cpp:244]     Train net output #0: loss1/loss1 = 2.41406 (* 0.3 = 0.724217 loss)
I0515 14:43:38.669783  1279 solver.cpp:244]     Train net output #1: loss2/loss1 = 2.38374 (* 0.3 = 0.715123 loss)
I0515 14:43:38.669787  1279 solver.cpp:244]     Train net output #2: loss3/loss3 = 1.92663 (* 1 = 1.92663 loss)
I0515 14:43:38.669798  1279 sgd_solver.cpp:106] Iteration 119000, lr = 0.000922632

After that, the loss suddenly increases back to roughly its initial value (around 8 to 9):

I0515 14:43:45.377710  1279 solver.cpp:228] Iteration 119040, loss = 8.3068
I0515 14:43:45.377751  1279 solver.cpp:244]     Train net output #0: loss1/loss1 = 5.77026 (* 0.3 = 1.73108 loss)
I0515 14:43:45.377758  1279 solver.cpp:244]     Train net output #1: loss2/loss1 = 5.76971 (* 0.3 = 1.73091 loss)
I0515 14:43:45.377763  1279 solver.cpp:244]     Train net output #2: loss3/loss3 = 5.70022 (* 1 = 5.70022 loss)
I0515 14:43:45.377768  1279 sgd_solver.cpp:106] Iteration 119040, lr = 0.000922605

For a long time after this sudden change, the network is unable to reduce the loss:

I0515 16:51:10.485610  1279 solver.cpp:228] Iteration 161040, loss = 9.01994
I0515 16:51:10.485649  1279 solver.cpp:244]     Train net output #0: loss1/loss1 = 5.63485 (* 0.3 = 1.69046 loss)
I0515 16:51:10.485656  1279 solver.cpp:244]     Train net output #1: loss2/loss1 = 5.63484 (* 0.3 = 1.69045 loss)
I0515 16:51:10.485661  1279 solver.cpp:244]     Train net output #2: loss3/loss3 = 5.62972 (* 1 = 5.62972 loss)
I0515 16:51:10.485666  1279 sgd_solver.cpp:106] Iteration 161040, lr = 0.0008937

I re-ran the experiment twice, and it repeats exactly at iteration 119,040. For more information: I adjusted the data when creating the LMDB database. I trained VGG-16 on this same database (step learning-rate policy, 80k max iterations, one step every 20k iterations) without any problem and reached 55% top-1 accuracy.
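
For comparison, the VGG-16 solver was along these lines (only lr_policy, stepsize and max_iter are the settings mentioned above; gamma and base_lr below are placeholders, not the exact values used):

lr_policy: "step"    # as opposed to "poly" above
stepsize: 20000      # one lr drop every 20k iterations
max_iter: 80000
gamma: 0.1           # placeholder, a typical value
base_lr: 0.001       # placeholder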

Has anyone encountered a similar problem?

0 Answers