caffe: "Check failed: status == CUDNN_STATUS_SUCCESS (3 vs. 0) CUDNN_STATUS_BAD_PARAM" during training

Date: 2018-03-19 09:04:47

Tags: deep-learning caffe cudnn

I am getting into network programming with Caffe. Since I am used to more comfortable, "lazier" solutions, I feel a bit overwhelmed by the problems that can come up.

Right now I am getting the error Check failed: status == CUDNN_STATUS_SUCCESS (3 vs. 0) CUDNN_STATUS_BAD_PARAM

Apparently this is usually caused by a broken CUDA or cuDNN installation, so I checked those, and they are up to date (CUDA: 8.0.61, cuDNN: 6.0.21).

Since I only get this error once I add the following ReLU layer, I assume it is caused by me mixing up a parameter:

layer {
  name: "relu1"
  type: "ReLU"
  bottom: "pool1"
  top: "relu1"
}

To give you all the information, here is the error message I get:

I0319 09:41:09.484148  6909 solver.cpp:44] Initializing solver from parameters:
test_iter: 10
test_interval: 1000
base_lr: 0.001
display: 20
max_iter: 800
lr_policy: "step"
gamma: 0.1
momentum: 0.9
weight_decay: 0.04
stepsize: 200
snapshot: 10000
snapshot_prefix: "models/train"
solver_mode: GPU
net: "train_val.prototxt"
I0319 09:41:09.484392  6909 solver.cpp:87] Creating training net from net file: train_val.prototxt
I0319 09:41:09.485164  6909 net.cpp:294] The NetState phase (0) differed from the phase (1) specified by a rule in layer feed2
I0319 09:41:09.485183  6909 net.cpp:51] Initializing net from parameters:
name: "CaffeNet"
state {
  phase: TRAIN
}
layer {
  name: "feed"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  hdf5_data_param {
    source: "train_h5_list.txt"
    batch_size: 50
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "gaussian"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 1
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "pool1"
  top: "relu1"
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "relu1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "gaussian"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "conv2"
  top: "ip2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  inner_product_param {
    num_output: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "sig1"
  type: "Sigmoid"
  bottom: "ip2"
  top: "sig1"
}
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "sig1"
  bottom: "label"
  top: "loss"
}
I0319 09:41:09.485752  6909 layer_factory.hpp:77] Creating layer feed
I0319 09:41:09.485780  6909 net.cpp:84] Creating Layer feed
I0319 09:41:09.485792  6909 net.cpp:380] feed -> data
I0319 09:41:09.485819  6909 net.cpp:380] feed -> label
I0319 09:41:09.485836  6909 hdf5_data_layer.cpp:80] Loading list of HDF5 filenames from: train_h5_list.txt
I0319 09:41:09.485860  6909 hdf5_data_layer.cpp:94] Number of HDF5 files: 1
I0319 09:41:09.486469  6909 hdf5.cpp:32] Datatype class: H5T_FLOAT
I0319 09:41:09.500986  6909 net.cpp:122] Setting up feed
I0319 09:41:09.501011  6909 net.cpp:129] Top shape: 50 227 227 3 (7729350)
I0319 09:41:09.501027  6909 net.cpp:129] Top shape: 50 1 (50)
I0319 09:41:09.501039  6909 net.cpp:137] Memory required for data: 30917600
I0319 09:41:09.501051  6909 layer_factory.hpp:77] Creating layer conv1
I0319 09:41:09.501080  6909 net.cpp:84] Creating Layer conv1
I0319 09:41:09.501087  6909 net.cpp:406] conv1 <- data
I0319 09:41:09.501101  6909 net.cpp:380] conv1 -> conv1
I0319 09:41:09.880740  6909 net.cpp:122] Setting up conv1
I0319 09:41:09.880765  6909 net.cpp:129] Top shape: 50 1 225 1 (11250)
I0319 09:41:09.880781  6909 net.cpp:137] Memory required for data: 30962600
I0319 09:41:09.880808  6909 layer_factory.hpp:77] Creating layer pool1
I0319 09:41:09.880836  6909 net.cpp:84] Creating Layer pool1
I0319 09:41:09.880846  6909 net.cpp:406] pool1 <- conv1
I0319 09:41:09.880861  6909 net.cpp:380] pool1 -> pool1
I0319 09:41:09.880888  6909 net.cpp:122] Setting up pool1
I0319 09:41:09.880899  6909 net.cpp:129] Top shape: 50 1 224 0 (0)
I0319 09:41:09.880913  6909 net.cpp:137] Memory required for data: 30962600
I0319 09:41:09.880921  6909 layer_factory.hpp:77] Creating layer relu1
I0319 09:41:09.880934  6909 net.cpp:84] Creating Layer relu1
I0319 09:41:09.880941  6909 net.cpp:406] relu1 <- pool1
I0319 09:41:09.880952  6909 net.cpp:380] relu1 -> relu1
F0319 09:41:09.881192  6909 cudnn.hpp:80] Check failed: status == CUDNN_STATUS_SUCCESS (3 vs. 0)  CUDNN_STATUS_BAD_PARAM

Edit: I tried setting the solver mode to CPU, and I still get this error.

2 answers:

Answer 0 (score: 2):

I have found one of the problems.

I0319 09:41:09.880765  6909 net.cpp:129] Top shape: 50 1 225 1 (11250)
I0319 09:41:09.880781  6909 net.cpp:137] Memory required for data: 30962600
I0319 09:41:09.880808  6909 layer_factory.hpp:77] Creating layer pool1
I0319 09:41:09.880836  6909 net.cpp:84] Creating Layer pool1
I0319 09:41:09.880846  6909 net.cpp:406] pool1 <- conv1
I0319 09:41:09.880861  6909 net.cpp:380] pool1 -> pool1
I0319 09:41:09.880888  6909 net.cpp:122] Setting up pool1
I0319 09:41:09.880899  6909 net.cpp:129] Top shape: 50 1 224 0 (0)

As you can see, the first convolution layer receives an input of size (50 227 227 3), which is a problem because Caffe assumes the second dimension holds the channels.

The convolution layer then naturally cuts the dimensions down on that basis, and from that point on no layer receives an input of a workable size.
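Caffe's HDF5Data layer expects blobs laid out as N x C x H x W, so another way around this would be to transpose the data when the HDF5 files are written. A minimal sketch using h5py (the file name and the placeholder data here are assumptions, not from the original setup):

import h5py
import numpy as np

# Placeholder data standing in for the real inputs:
# images arrive as N x H x W x C, labels as N x 1.
images = np.random.rand(50, 227, 227, 3).astype(np.float32)
labels = np.random.rand(50, 1).astype(np.float32)

with h5py.File("train.h5", "w") as f:
    # Transpose to the N x C x H x W layout Caffe expects.
    f.create_dataset("data", data=images.transpose(0, 3, 1, 2))
    f.create_dataset("label", data=labels)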

I managed to solve the problem by simply reshaping the input like this:

layer {
  name: "reshape"
  type: "Reshape"
  bottom: "data"
  top: "res"
  reshape_param {
    shape {
      dim: 50
      dim: 3
      dim: 227
      dim: 227
    }
  }
}

The first dimension here is the batch size, so whoever reads this must remember to set this dim to 1 in the .prototxt used for the classification phase (since no batches are used there).
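For reference, the classification-phase version of that layer would then look like this (a sketch; only the batch dimension changes):

layer {
  name: "reshape"
  type: "Reshape"
  bottom: "data"
  top: "res"
  reshape_param {
    shape {
      dim: 1      # batch size 1 for single-image classification
      dim: 3
      dim: 227
      dim: 227
    }
  }
}

Alternatively, dim: 0 tells Caffe's Reshape layer to copy that dimension from the bottom blob, which avoids hard-coding the batch size at all.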

Edit: I will mark this as the answer since it covers the basic solution to the problem I had, and no other solutions are in sight. If anyone wants to shed more light on the issue, please do.

Answer 1 (score: 1):

This error is raised because you have no more room to "shrink". From your error message: 50 1 224 0 (0) — this shows that one of the net's dimensions has already reached 0.

To fix this error, you can tweak a few parameters, namely the (S)tride, the (K)ernel size, and the (P)adding. To compute the size of the next layer (W_new), you can use the following formula:

W_new = (W_old - K + 2*P) / S + 1

So if we have an input of 227x227x3, and the first layer has K = 5, S = 2, P = 1, and num_output = N, then the dimensions of conv1 are:

(227 - 5 + 2*1) / 2 + 1 = 113, i.e. 113x113xN.

Note: if the division does not come out even, Caffe rounds the result down for convolution layers and up for pooling layers.
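As a sanity check, here is a small Python sketch of this formula (assuming Caffe's rounding behavior: down for convolutions, up for pooling) that traces the spatial dimensions of the net above and reproduces the shapes from the log:

import math

def conv_out(w, k, s=1, p=0):
    # Convolution output size; Caffe rounds down.
    return (w + 2 * p - k) // s + 1

def pool_out(w, k, s=1, p=0):
    # Pooling output size; Caffe rounds up.
    return int(math.ceil(float(w + 2 * p - k) / s)) + 1

# The HDF5 blob is N x 227 x 227 x 3, which Caffe reads as
# C = 227, H = 227, W = 3.
h, w = 227, 3
h, w = conv_out(h, 3), conv_out(w, 3)  # conv1 (k=3, s=1): 225 x 1
h, w = pool_out(h, 2), pool_out(w, 2)  # pool1 (k=2, s=1): 224 x 0
print(h, w)  # the width has collapsed to 0, so relu1 receives an empty blob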

Edit: the reason this surfaces at the ReLU layer is probably that the ReLU layer has nothing left to pass on, so it raises the error.