How to interpret the CIFAR-10 prediction output?

Date: 2015-08-12 18:09:58

Tags: c++ deep-learning caffe conv-neural-network

I have trained the CIFAR-10 (Caffe) model for two-class classification: pedestrian and non-pedestrian. Training looks fine and the updated weights are in the caffemodel file. I used two labels, 1 for pedestrians and 2 for non-pedestrians, with pedestrian images (64 x 160) and background images (64 x 160). After training, I test with positive images (pedestrian images) and negative images (background images). My test prototxt file is shown below:

name: "CIFAR10_quick_test"
layers 
{
  name: "data"
  type: MEMORY_DATA
  top: "data"
  top: "label"
  memory_data_param 
  {
    batch_size: 1
    channels: 3
    height: 160
    width: 64
  }
  transform_param 
  {
    crop_size: 64
    mirror: false
    mean_file: "../../examples/cifar10/mean.binaryproto"
  }
}
layers {
  name: "conv1"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1"
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layers {
  name: "pool1"
  type: POOLING
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "relu1"
  type: RELU
  bottom: "pool1"
  top: "pool1"
}
layers {
  name: "conv2"
  type: CONVOLUTION
  bottom: "pool1"
  top: "conv2"
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layers {
  name: "relu2"
  type: RELU
  bottom: "conv2"
  top: "conv2"
}
layers {
  name: "pool2"
  type: POOLING
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "conv3"
  type: CONVOLUTION
  bottom: "pool2"
  top: "conv3"
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 64
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layers {
  name: "relu3"
  type: RELU
  bottom: "conv3"
  top: "conv3"
}
layers {
  name: "pool3"
  type: POOLING
  bottom: "conv3"
  top: "pool3"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "ip1"
  type: INNER_PRODUCT
  bottom: "pool3"
  top: "ip1"
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 64
  }
}
layers {
  name: "ip2"
  type: INNER_PRODUCT
  bottom: "ip1"
  top: "ip2"
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 10
  }
}
layers {
  name: "prob"
  type: SOFTMAX
  bottom: "ip2"
  top: "prob"
}

For testing, I used test_predict_imagenet.cpp with some modifications to the paths and the image size.

I cannot figure out the test output. When I test with a positive image, I get this output:

I0813 01:55:30.378114  7668 test_predict_cifarnet.cpp:72] 1
I0813 01:55:30.379082  7668 test_predict_cifarnet.cpp:72] 3.90971e-007
I0813 01:55:30.381088  7668 test_predict_cifarnet.cpp:72] 0.00406029
I0813 01:55:30.383090  7668 test_predict_cifarnet.cpp:72] 0.995887
I0813 01:55:30.384119  7668 test_predict_cifarnet.cpp:72] 1.96203e-006
I0813 01:55:30.385095  7668 test_predict_cifarnet.cpp:72] 3.50333e-005
I0813 01:55:30.386119  7668 test_predict_cifarnet.cpp:72] 1.2796e-008
I0813 01:55:30.387097  7668 test_predict_cifarnet.cpp:72] 1.48836e-005
I0813 01:55:30.389093  7668 test_predict_cifarnet.cpp:72] 1.12237e-007
I0813 01:55:30.390100  7668 test_predict_cifarnet.cpp:72] 4.71238e-008
I0813 01:55:30.391101  7668 test_predict_cifarnet.cpp:72] 9.04134e-008

When I test with a negative image, I get this output:

I0813 01:53:40.896139 10856 test_predict_cifarnet.cpp:72] 1
I0813 01:53:40.897117 10856 test_predict_cifarnet.cpp:72] 6.20882e-006
I0813 01:53:40.898115 10856 test_predict_cifarnet.cpp:72] 7.10468e-005
I0813 01:53:40.900184 10856 test_predict_cifarnet.cpp:72] 0.999911
I0813 01:53:40.901185 10856 test_predict_cifarnet.cpp:72] 3.4275e-006
I0813 01:53:40.902189 10856 test_predict_cifarnet.cpp:72] 2.38526e-007
I0813 01:53:40.903192 10856 test_predict_cifarnet.cpp:72] 2.29073e-007
I0813 01:53:40.905187 10856 test_predict_cifarnet.cpp:72] 1.7243e-006
I0813 01:53:40.906188 10856 test_predict_cifarnet.cpp:72] 5.40765e-007
I0813 01:53:40.908195 10856 test_predict_cifarnet.cpp:72] 1.57534e-006
I0813 01:53:40.909195 10856 test_predict_cifarnet.cpp:72] 3.72312e-006

How should I interpret the test output?

Is there a more efficient way to test the model on video input (frame by frame from a video clip)?

1 Answer:

Answer 0 (score: 2)

Why does your last layer ip2 have num_output: 10? Don't you only need a 2-way classifier? And why did you use labels 1 and 2 rather than 0 and 1?

What you got: you have 11 outputs. One is the "label" output of the data layer; the other 10 are the 10-vector output of the softmax layer. It is not clear what the values of this 10-vector mean, because you trained with only two labels, so 8 of the 10 entries received no supervision at all. Moreover, judging from the first output line, it seems both tests were run on samples with label 1 rather than 2.
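To make the structure of that output concrete, here is a minimal, self-contained sketch (plain C++, not Caffe code) that treats the 10 values printed after the first "label" line as the softmax vector and picks the argmax as the predicted class. The numbers are simply copied from the positive-image log above:

```cpp
#include <algorithm>
#include <cstdio>
#include <iterator>
#include <vector>

int main() {
  // Entry 0 of the log is the "label" blob; these are the 10 softmax
  // probabilities that follow it (positive-image run above).
  std::vector<float> prob = {3.90971e-007f, 0.00406029f, 0.995887f,
                             1.96203e-006f, 3.50333e-005f, 1.2796e-008f,
                             1.48836e-005f, 1.12237e-007f, 4.71238e-008f,
                             9.04134e-008f};
  // The predicted class is the index of the largest probability.
  int pred = static_cast<int>(std::distance(
      prob.begin(), std::max_element(prob.begin(), prob.end())));
  std::printf("predicted class: %d (p = %g)\n", pred, prob[pred]);
  return 0;
}
```

On the positive-image log this prints index 2 (p = 0.995887); the negative-image log also peaks at index 2 (0.999911), which is exactly why the raw 10-way output is so hard to read for a 2-class problem.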

What you should do:
1. Change the topmost fully connected layer to have only two outputs (I also changed the format to match the newer protobuf syntax):

layer {
  name: "ip2/pedestrains"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 2 # this is what you need to change
  }
}

2. Change the binary labels in your training data to 0/1 instead of 1/2.

Now you can train again and see what you get.
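Regarding the second question (testing frame by frame from a video clip): one common approach with this prototxt is to keep the MEMORY_DATA input layer and push frames into it from C++ via OpenCV. The sketch below is untested and makes several assumptions: the file names (cifar10_quick_test.prototxt, cifar10_quick_iter_4000.caffemodel, pedestrians.avi) are placeholders, the exact headers and the Forward call differ between Caffe versions, it assumes the net has been retrained with num_output: 2 and 0/1 labels as described above, and it assumes background was relabeled 0 and pedestrian 1.

```cpp
// Hedged sketch: frame-by-frame classification with the 2-output net above,
// using Caffe's C++ API and OpenCV. Layer name "data" and blob name "prob"
// match the prototxt in the question.
#include <caffe/caffe.hpp>
// In older Caffe versions MemoryDataLayer is declared in caffe/data_layers.hpp.
#include <caffe/layers/memory_data_layer.hpp>
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

using caffe::Blob;
using caffe::Caffe;
using caffe::MemoryDataLayer;
using caffe::Net;

int main() {
  Caffe::set_mode(Caffe::CPU);

  // Hypothetical file names -- replace with your deploy prototxt and weights.
  Net<float> net("cifar10_quick_test.prototxt", caffe::TEST);
  net.CopyTrainedLayersFrom("cifar10_quick_iter_4000.caffemodel");

  // The MEMORY_DATA layer lets us feed frames in from code.
  boost::shared_ptr<MemoryDataLayer<float> > input =
      boost::dynamic_pointer_cast<MemoryDataLayer<float> >(
          net.layer_by_name("data"));

  cv::VideoCapture cap("pedestrians.avi");  // hypothetical video clip
  cv::Mat frame;
  while (cap.read(frame)) {
    // Resize to the 64 x 160 (width x height) input the prototxt expects.
    cv::Mat patch;
    cv::resize(frame, patch, cv::Size(64, 160));

    std::vector<cv::Mat> images(1, patch);
    std::vector<int> labels(1, 0);  // dummy label, ignored at test time
    input->AddMatVector(images, labels);

    net.Forward();  // ForwardPrefilled() in older Caffe versions
    const boost::shared_ptr<Blob<float> > prob = net.blob_by_name("prob");
    // With num_output: 2 and 0/1 labels, prob has two entries:
    // index 0 = background, index 1 = pedestrian (assumed relabeling).
    const float* p = prob->cpu_data();
    std::printf("p(background)=%.3f  p(pedestrian)=%.3f  -> %s\n",
                p[0], p[1], p[1] > p[0] ? "pedestrian" : "background");
  }
  return 0;
}
```

In a real detector you would typically classify many candidate windows per frame rather than one resized frame, and batch them through the network, but the input mechanism (MemoryDataLayer plus AddMatVector) stays the same.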