solver.prototxt for the adam solver in caffe

Time: 2017-01-15 07:38:42

Tags: caffe solver gradient-descent

My solver.prototxt using Adam is shown below. Do I need to add or remove any settings? The loss does not seem to be decreasing.

net: "/home/softwares/caffe-master/examples/hpm/hp.prototxt"
test_iter: 6
test_interval: 1000
base_lr: 0.001
momentum: 0.9
momentum2: 0.999
delta: 0.00000001
lr_policy: "fixed"
regularization_type: "L2"
stepsize: 2000
display: 100
max_iter: 20000
snapshot: 1000
snapshot_prefix: "/home/softwares/caffe-master/examples/hpm/hp"
type: "Adam"
solver_mode: GPU
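
For context, Caffe's Adam solver uses momentum as β1, momentum2 as β2, delta as ε, and base_lr as α in the update rule from "ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION", so delta: 0.00000001 is just the paper's default ε = 1e-8 written out. With gradient $g_t$, the update is:

$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2$$

$$\theta_t = \theta_{t-1} - \alpha\,\frac{m_t/(1-\beta_1^t)}{\sqrt{v_t/(1-\beta_2^t)} + \varepsilon}$$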

3 Answers:

Answer 0 (score: 1):

Comparing with the caffe example on mnist, 'stepsize' can be removed, since 'lr_policy' is "fixed".
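
If learning-rate decay were actually wanted, stepsize only takes effect under a decaying policy. A minimal sketch (gamma is an illustrative addition here, required by the "step" policy but not part of the original answer):

# lr = base_lr * gamma ^ floor(iter / stepsize)
lr_policy: "step"
gamma: 0.1
stepsize: 2000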

Answer 1 (score: 1):

How is it working for you? If you are using Adam, I suggest you look at Caffe's settings. I don't know why you have the L2 and delta values. This is the standard setup:

# The train/test net protocol buffer definition
# this follows "ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION"
net: "examples/mnist/lenet_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# All parameters are from the cited paper above
base_lr: 0.001
momentum: 0.9
momentum2: 0.999
# since Adam dynamically changes the learning rate, we set the base learning
# rate to a fixed value
lr_policy: "fixed"
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
# solver mode: CPU or GPU
type: "Adam"
solver_mode: GPU
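
Applied to the question's solver, the same layout would look roughly like the sketch below (paths copied from the question; dropping regularization_type, delta, and stepsize should leave behavior unchanged, since "L2" and delta = 1e-8 are Caffe's defaults and stepsize is unused under a "fixed" policy):

net: "/home/softwares/caffe-master/examples/hpm/hp.prototxt"
test_iter: 6
test_interval: 1000
base_lr: 0.001
momentum: 0.9
momentum2: 0.999
lr_policy: "fixed"
display: 100
max_iter: 20000
snapshot: 1000
snapshot_prefix: "/home/softwares/caffe-master/examples/hpm/hp"
type: "Adam"
solver_mode: GPU

Training then runs with the usual caffe train --solver=<path to this file> command.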

Answer 2 (score: 0):

Try a learning rate of 0.1 with a slower step size (such as 300) and observe the behavior. Also check that the lmdb/hdf5 files are correctly formatted and properly scaled so the net can learn; you can verify this by generating the files from a representative sample of the dataset.
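
In solver.prototxt terms, that suggestion would look roughly like the sketch below (gamma is an illustrative addition, since the "step" policy requires it; the answer itself gives only the learning rate and step size):

base_lr: 0.1
lr_policy: "step"
gamma: 0.1
stepsize: 300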