Training MNIST with AlexNet

Date: 2018-03-15 18:03:54

Tags: caffe mnist

I am a beginner with Caffe. Following the tutorials, I have already finished training MNIST with LeNet and ImageNet with AlexNet, with fairly good results. Then I tried to train MNIST with the AlexNet model. The training model is almost the same as models/bvlc_alexnet/train_val.prototxt, with changes in a few places:

layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mirror: false   # set to false; crop_size and mean_file deleted
  }
  data_param {
    source: "./mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}

...

layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    mirror: false   # set to false; crop_size and mean_file deleted
  }
  data_param {
    source: "./mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}

...

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 96
    kernel_size: 3   # changed to 3
    stride: 2        # changed to 2
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
     value: 0
    }
  }
}
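As an aside, the reason for shrinking conv1's kernel and stride can be seen by plugging the numbers into Caffe's convolution output-size formula, floor((input + 2*pad - kernel) / stride) + 1. This is my own back-of-envelope check, not part of the original post:

```python
# Trace the spatial size a 28x28 MNIST image has after conv1.
# conv_out implements Caffe's convolution output-size formula.

def conv_out(size, kernel, stride, pad=0):
    """floor((size + 2*pad - kernel) / stride) + 1"""
    return (size + 2 * pad - kernel) // stride + 1

print(conv_out(28, 3, 2))    # modified conv1 (kernel 3, stride 2) -> 13x13
print(conv_out(28, 11, 4))   # stock AlexNet conv1 (kernel 11, stride 4) -> 5x5,
                             # leaving almost nothing for the later conv/pool layers
```

With the stock 11x11/stride-4 kernel, a 28x28 digit collapses to 5x5 after the very first layer, so reducing to kernel 3 / stride 2 keeps a usable feature map.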

...

layer {
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 10   # changed to 10
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

The solver.prototxt is as follows:

net: "./train_val.prototxt"
test_iter: 1000
test_interval: 100
base_lr: 0.01
lr_policy: "inv"
power: 0.75
gamma: 0.1
stepsize: 1000
display: 100
max_iter: 100000
momentum: 0.9
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "./caffe_alexnet_train"
solver_mode: GPU
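One detail worth double-checking in this solver (my own arithmetic, not part of the original post): with the TEST-phase batch_size of 64, test_iter: 1000 scores 64,000 images per test pass, while the standard MNIST test set has only 10,000; Caffe simply wraps around the LMDB, so each image is counted several times per reported accuracy.

```python
# How many images does one test pass cover, given test_iter in the solver
# and the TEST-phase batch_size? (10,000 is the standard MNIST test-set size.)
test_iter = 1000
test_batch_size = 64
mnist_test_images = 10000

images_per_pass = test_iter * test_batch_size
print(images_per_pass)                      # 64000
print(images_per_pass / mnist_test_images)  # 6.4 -> each image scored ~6 times
```

This does not invalidate the accuracy number, but test_iter: 157 (157 * 64 ≈ 10,000) would cover the set about once.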

After training for 100,000 iterations, the accuracy reaches about 0.97:

I0315 19:28:54.827383 26505 solver.cpp:258]     Train net output #0: loss = 0.0331752 (* 1 = 0.0331752 loss)

...

I0315 19:28:56.384718 26505 solver.cpp:351] Iteration 100000, Testing net (#0)
I0315 19:28:58.121800 26505 solver.cpp:418]     Test net output #0: accuracy = 0.974875
I0315 19:28:58.121834 26505 solver.cpp:418]     Test net output #1: loss = 0.0804802 (* 1 = 0.0804802 loss)

Then I used a Python script to predict a single image from the test set:

import os
import sys

import numpy as np
import matplotlib.pyplot as plt

caffe_root = '/home/ubuntu/pkg/local/caffe/'  # note the trailing slash
sys.path.insert(0, caffe_root + 'python')     # must come before "import caffe"

import caffe

MODEL_FILE = './deploy.prototxt'
PRETRAINED = './caffe_alexnet_train_iter_100000.caffemodel'
IMAGE_FILE = './4307.png'

caffe.set_mode_cpu()  # select the mode before building the net

# MNIST images are single-channel
input_image = caffe.io.load_image(IMAGE_FILE, color=False)

net = caffe.Classifier(MODEL_FILE, PRETRAINED)

prediction = net.predict([input_image], oversample=False)

print('predicted class: ', prediction[0].argmax())
print('predicted class all: ', prediction[0])

But the prediction is wrong (the same script predicts well on MNIST trained with LeNet), and the per-class probabilities also look odd:

predicted class:  9   (the correct label is 5)

predicted class all:  [0.01998338 0.14941786 0.09392905 0.07361069 0.07640345 0.10996494 0.03646726 0.12371133 0.15246753 0.16404454]
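The probabilities are not just wrong, they are close to uniform (1/10 per class), which suggests the network has learned very little that applies to this image. A quick check of my own, not from the question:

```python
# The per-class probabilities printed above, copied verbatim.
probs = [0.01998338, 0.14941786, 0.09392905, 0.07361069, 0.07640345,
         0.10996494, 0.03646726, 0.12371133, 0.15246753, 0.16404454]

print(round(sum(probs), 3))               # 1.0: the softmax output itself is valid
print(round(max(probs), 3))               # 0.164: the "winner" barely beats 1/10
print(round(max(probs) - min(probs), 3))  # 0.144: the distribution is nearly flat
```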

The deploy.prototxt is almost the same as models/bvlc_alexnet/deploy.prototxt, with the same changes as in train_val.prototxt.

Any suggestions?

1 answer:

Answer 0 (score: 0):

AlexNet was designed to discriminate among 1000 classes, trained on 1.3M input images of (nominally) 256x256x3 data values. You are using essentially the same tool for 10 classes with 28x28x1 input.

Quite simply, you have over-fit the design to the problem.

If you want to use the general AlexNet design for this much simpler job, you will need to scale it down appropriately. It will take some experimentation to find a workable definition of "appropriately": shrink the conv layers by some factor, add a dropout, remove a conv layer entirely, ...
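For illustration only, one of those experiments might look like the fragment below; the specific factor (96 -> 32 filters) is a guess to start tuning from, not a value given in the answer:

```protobuf
# Hypothetical scaled-down conv1: same structure as the question's layer,
# but with num_output shrunk from 96 to 32.
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 32   # was 96; shrink by some factor
    kernel_size: 3
    stride: 2
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
}
```

The remaining conv layers and the 4096-wide fc6/fc7 would be shrunk in the same spirit, or a conv layer dropped entirely, with each variant evaluated on the held-out test set.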