Training MNIST on Caffe with my own net: Message type "caffe.LayerParameter" has no field named "lr_mult"

Time: 2016-09-08 02:15:13

Tags: machine-learning computer-vision neural-network deep-learning caffe

I wrote a net to train on the MNIST dataset in Caffe, but ran into the error: Message type "caffe.LayerParameter" has no field named "blobs_lr". I searched the internet and people said to change blobs_lr to lr_mult, since the former is the old style. I did so, but the error is still not resolved:

I0907 14:47:33.021236 23466 solver.cpp:81] Creating training net from train_net file: /home/pris/caffe-master/examples/mnist/my_lenet_train.prototxt
[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 22:10: Message type "caffe.LayerParameter" has no field named "lr_mult".
F0907 14:47:33.021351 23466 upgrade_proto.cpp:79] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /home/pris/caffe-master/examples/mnist/my_lenet_train.prototxt
*** Check failure stack trace: ***
    @     0x7fc9b530bdaa  (unknown)
    @     0x7fc9b530bce4  (unknown)
    @     0x7fc9b530b6e6  (unknown)
    @     0x7fc9b530e687  (unknown)
    @     0x7fc9b591b19e  caffe::ReadNetParamsFromTextFileOrDie()
    @     0x7fc9b59097e7  caffe::Solver<>::InitTrainNet()
    @     0x7fc9b590a83c  caffe::Solver<>::Init()
    @     0x7fc9b590ab6a  caffe::Solver<>::Solver()
    @     0x7fc9b5a49663  caffe::Creator_SGDSolver<>()
    @           0x40e9be  caffe::SolverRegistry<>::CreateSolver()
    @           0x407b62  train()
    @           0x4059ec  main
    @     0x7fc9b4619f45  (unknown)
    @           0x406121  (unknown)
    @              (nil)  (unknown)
Aborted (core dumped)

Here is the net I defined (my_lenet_train.prototxt):

name:"LeNet"
layer
{
  name:"mnist"
  type:"Data"
  data_param
  {
    source:"/home/pris/caffe-master/examples/mnist/mnist_train_lmdb"
    batch_size:64
    scale:0.00390625
  }
  top:"data"
  top:"label"
}

layer
{
  name:"conv1"
  type:"Convolution"
  bottom:"data"
  top:"conv1"
  lr_mult:1
  lr_mult:2
  convolution_param
  {  
    num_output:20
    kernel_size:5
    stride:1
    weight_filler  { type:"xavier" }
    bias_filler {type:"constant" }
  }
}


layer
{
  name:"pool1"
  type:"Pooling"
  bottom:"conv1"
  top:"pool1" 
  pooling_param
  {
    pool:MAX
    kernel_size:2
    stride:2
  }
}


layer
{
  name:"conv2"
  type:"Convolution"
  bottom:"pool1"
  top:"conv2"
  lr_mult:1
  lr_mult:2
  convolution_param
  {  
    num_output:50
    kernel_size:5
    stride:1
    weight_filler  { type:"xavier" }
    bias_filler {type:"constant" }
  }
}

layer
{
  name:"pool2"
  type:"Pooling"
  bottom:"conv2"
  top:"pool2" 
  pooling_param
  {
    pool:MAX
    kernel_size:2
    stride:2
  }
}

layer
{
  name:"ip1"
  type:"InnerProduct"
  lr_mult:1
  lr_mult:2
  inner_product_param
  {
    num_output:500
    weight_filler {type:"xavier"}
    bias_filler {type:"constant" }
  }
  bottom:"pool2"
  top:"ip1"
}

layer
{
  name:"relu1"
  type:"ReLU"
  bottom:"ip1"
  top:"ip1"
}

layer
{
  name:"ip2"
  type:"InnerProduct"
  lr_mult:1
  lr_mult:2
  inner_product_param
  {
    num_output:10
    weight_filler {type:"xavier"}
    bias_filler {type:"constant" }
  }
  bottom:"ip1"
  top:"ip2"
}

layer
{
 name:"loss"
 type:"SoftmaxWithLoss"
 bottom:"ip2"
 bottom:"label"
}


}

my_lenet_solver.prototxt:

train_net:"/home/pris/caffe-master/examples/mnist/my_lenet_train.prototxt"
test_net:"/home/pris/caffe-master/examples/mnist/lenet_train_test.prototxt"
test_iter:100
test_interval:500
base_lr:0.01
momentum:0.9
weight_decay:0.0005
lr_policy:"inv"
gamma:0.0001
power:0.75
display:100
max_iter:10000
snapshot:5000
snapshot_prefix:"/home/pris/caffe-master/examples/mnist/lenet"
solver_mode:1

train_lenet.sh:

#!/usr/bin/env sh

TOOLS=/home/pris/caffe-master/build/tools

$TOOLS/caffe train \
  --solver=/home/pris/caffe-master/examples/mnist/my_lenet_solver.prototxt
By the way, I previously used the ImageNet example to train on my own dataset (like in this question: use caffe to train my own jpg datasets: type "caffe.ImageDataParameter" has no field named "backend"). That net definition also used 'lr_mult', yet I never ran into this error when training it. I am really curious why. There is only one version of Caffe on my computer.

1 answer:

Answer 0 (score: 1)

The 'lr_mult' parameter must be placed inside a 'param' block. Check here.

layer {
  name: "conv1"
  type: "Convolution"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
  bottom: "data"
  top: "conv1"
}
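
The same fix applies to every layer in the question's prototxt that lists lr_mult at the top level (conv2, ip1, ip2). As a minimal sketch, assuming you keep the usual 1x multiplier for the weights and 2x for the bias, the ip1 layer would become:

layer {
  name: "ip1"
  type: "InnerProduct"
  # Per-blob multipliers live in param blocks:
  # the first param entry applies to the weights, the second to the bias.
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 500
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
  bottom: "pool2"
  top: "ip1"
}

The effective learning rate for each blob is the solver's base_lr multiplied by its lr_mult, which is why the weights and bias can be tuned separately.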