How can I approximate simple addition with Caffe?

Asked: 2018-07-17 11:20:18

Tags: python neural-network conv-neural-network caffe pycaffe

I am trying a very simple setup with Caffe, but I am not getting the expected output. Here is the code I use to train the model:

import random

import caffe
import numpy as np

solver = caffe.get_solver(solver_file)  # solver_file points at the solver.prototxt below
net = solver.net

size = 100000
for i in xrange(size):
    x1 = float(random.randint(0,1000))
    x2 = float(random.randint(0,1000))
    y = float(x1 + x2 + 1000)
    a = np.zeros(shape=(1,1,2))
    a[0][0][0] = x1
    a[0][0][1] = x2

    b = np.zeros(shape=(1,1,1))
    b[0][0][0] = y

    solver.net.blobs['data'].data[...] = a
    solver.net.blobs['fc'].data[...] = b
    solver.step(1)

solver.net.save(pretrained_file)
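
A quick sanity check of what the solver actually feeds, using the standard pycaffe blob API (the shape of the data blob is fixed by the data layer, not by the array assigned to it, so the (1, 1, 2) array above is broadcast across the whole batch):

print solver.net.blobs['data'].data.shape   # e.g. (10, 1, 1, 2) with the HDF5Data layer below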

Here is the solver.prototxt:

net: "train_val.prototxt"
base_lr: 0.001
lr_policy: "step"
gamma: 0.0001
#power: 0.75 
test_interval: 2500
test_iter: 100
stepsize: 200000
display: 200
max_iter: 40000
momentum: 0.9
weight_decay: 0.0005
snapshot: 0
#test_initialization: false
snapshot_prefix: "test_model"
#snapshot_after_train: false
solver_mode: GPU
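
For reference, Caffe's "step" learning-rate policy computes

lr = base_lr * gamma ^ floor(iter / stepsize)

so with stepsize: 200000 and max_iter: 40000 the rate never steps down and stays at base_lr = 0.001 for the whole run, which matches the lr = 0.001 lines in the training log further down.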

Here is the train_val.prototxt:

name: "test1"
layer {
  name: "train_data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include {
     phase: TRAIN
  }

  hdf5_data_param {
    source: "train_path.txt"
    shuffle: true
    batch_size: 10
  }
}


layer {
  name: "train_data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include {
     phase: TEST
  }

  hdf5_data_param {
    source: "test_path.txt"
    batch_size: 1
  }
}



layer {
  name: "fc1"
  type: "InnerProduct"
  bottom: "data"
  top: "fc1"
  inner_product_param {
    num_output: 16
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.1
    }
  }
}

layer {
  bottom: "fc1"
  top: "fc1"
  name: "relu1"
  type: "ReLU"
}

layer {
  name: "fc2"
  type: "InnerProduct"
  bottom: "fc1"
  top: "fc2"
  inner_product_param {
    num_output: 16
    weight_filler {
      type: "gaussian"
      std: 0.01
    }

    bias_filler {
      type: "constant"
      value: 0.1
    }

  }
}

layer {
  bottom: "fc2"
  top: "fc2"
  name: "relu2"
  type: "ReLU"
}

layer {
  name: "fc"
  type: "InnerProduct"
  bottom: "fc2"
  top: "fc"
  inner_product_param {
    num_output: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }

    bias_filler {
      type: "constant"
      value: 0.1
    }


  }
}



layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "fc"
  bottom: "label"
  top: "loss"
}

Note that I created the HDF5Data layers, but I do not use them during training (I feed the network directly). I have also tried using only the HDF5Data layers, and it made no difference.
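
For completeness, here is a minimal sketch of how the HDF5 files referenced by train_path.txt could be generated for the HDF5Data path (h5py is assumed, and train.h5 is a placeholder file name; the dataset names must match the layer's top blobs):

import h5py
import numpy as np

size = 100000
x = np.random.randint(0, 1000, size=(size, 1, 1, 2)).astype(np.float32)
y = (x[:, 0, 0, 0] + x[:, 0, 0, 1] + 1000).reshape(-1, 1).astype(np.float32)

with h5py.File('train.h5', 'w') as f:
    f.create_dataset('data', data=x)    # matches top: "data"
    f.create_dataset('label', data=y)   # matches top: "label"

with open('train_path.txt', 'w') as f:
    f.write('train.h5\n')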

After 100000 iterations, the loss is still huge:

I0717 20:04:24.367807 28873 solver.cpp:219] Iteration 99000 (5882.35 iter/s, 0.034s/200 iters), loss = 74512.1
I0717 20:04:24.367841 28873 solver.cpp:238]     Train net output #0: loss = 74512.1 (* 1 = 74512.1 loss)
I0717 20:04:24.367856 28873 sgd_solver.cpp:105] Iteration 99000, lr = 0.001
I0717 20:04:24.402856 28873 solver.cpp:219] Iteration 99200 (5714.29 iter/s, 0.035s/200 iters), loss = 80065
I0717 20:04:24.402890 28873 solver.cpp:238]     Train net output #0: loss = 80065 (* 1 = 80065 loss)
I0717 20:04:24.402905 28873 sgd_solver.cpp:105] Iteration 99200, lr = 0.001
I0717 20:04:24.437564 28873 solver.cpp:219] Iteration 99400 (5882.35 iter/s, 0.034s/200 iters), loss = 90577.7
I0717 20:04:24.437598 28873 solver.cpp:238]     Train net output #0: loss = 90577.7 (* 1 = 90577.7 loss)
I0717 20:04:24.437613 28873 sgd_solver.cpp:105] Iteration 99400, lr = 0.001
I0717 20:04:24.472441 28873 solver.cpp:219] Iteration 99600 (5882.35 iter/s, 0.034s/200 iters), loss = 116119
I0717 20:04:24.472476 28873 solver.cpp:238]     Train net output #0: loss = 116119 (* 1 = 116119 loss)
I0717 20:04:24.472491 28873 sgd_solver.cpp:105] Iteration 99600, lr = 0.001
I0717 20:04:24.507279 28873 solver.cpp:219] Iteration 99800 (5882.35 iter/s, 0.034s/200 iters), loss = 55599.9
I0717 20:04:24.507313 28873 solver.cpp:238]     Train net output #0: loss = 55599.9 (* 1 = 55599.9 loss)
I0717 20:04:24.507328 28873 sgd_solver.cpp:105] Iteration 99800, lr = 0.001
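
To put these numbers in scale: Caffe's EuclideanLoss is (1/2N) * sum ||prediction - label||^2, averaged over the batch, so a reported loss of about 75000 works out to

per-sample squared error ≈ 2 * 75000 = 150000
RMS error ≈ sqrt(150000) ≈ 387

which is roughly the standard deviation of y = x1 + x2 + 1000 itself (about 408), i.e. the error is on the order of the spread of the targets.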

I wrote a simple test to look at the network's output:

import caffe
import numpy as np

def get_output(net, in_):
    out = net.forward(data=in_)
    out_ = out[net.outputs[0]]
    return out_

def get_outputs(net, ins_):
    # Forward each input separately and collect the results.
    return [get_output(net, in_) for in_ in ins_]

def main(args):
    model_file = "deploy.prototxt"
    net = caffe.Net(model_file, pretrained_file, caffe.TEST)

    ins_ = []
    in1_ = np.zeros((1, 1, 2), dtype=np.float32)
    in1_[0][0][0] = 14
    in1_[0][0][1] = 77
    ins_.append(in1_)

    in2_ = np.zeros((1, 1, 2), dtype=np.float32)
    in2_[0][0][0] = 100
    in2_[0][0][1] = 200
    ins_.append(in2_)

    outs = get_outputs(net, ins_)
    print outs[0]
    print outs[1]

Here is the deploy.prototxt:

name: "test1"
input: "data"
input_dim: 1
input_dim: 1
input_dim: 1
input_dim: 2


layer {
  name: "fc1"
  type: "InnerProduct"
  bottom: "data"
  top: "fc1"
  inner_product_param {
    num_output: 16
  }
}

layer {
  bottom: "fc1"
  top: "fc1"
  name: "relu1"
  type: "ReLU"
}


layer {
  name: "fc2"
  type: "InnerProduct"
  bottom: "fc1"
  top: "fc2"
  inner_product_param {
    num_output: 16

  }
}

layer {
  bottom: "fc2"
  top: "fc2"
  name: "relu2"
  type: "ReLU"
}


layer {
  name: "fc"
  type: "InnerProduct"
  bottom: "fc2"
  top: "fc"
  inner_product_param {
    num_output: 1



  }
}
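
(Aside: the input: / input_dim: block at the top is the legacy syntax; newer Caffe versions express the same thing as an Input layer, which would look like:

layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 1 dim: 1 dim: 2 } }
}

Either form declares the 1x1x1x2 data blob.)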

The output is:

[[[ 14.  77.]]]
[[[ 100.  200.]]]
[[ 1981.43566895]]
[[ 1981.43566895]]

Strangely, two different inputs give exactly the same output. When I look at net.blobs['fc1'], the activations are all zeros, which explains why both inputs produce identical outputs.
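
For reference, this is the kind of check that shows it (standard pycaffe blob access; in1_ is the first test input from above):

net.blobs['data'].data[...] = in1_
net.forward()
for name in ['fc1', 'fc2', 'fc']:
    print name, net.blobs[name].data.flatten()[:4]  # fc1 comes out all zeros here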

The weights of the fc1 layer look normal to me:

>>> net.params['fc1'][0].data
array([[ -1.78788041e+12,  -1.80693867e+12],
       [ -4.77027521e-03,  -1.57534312e-02],
       [ -2.85808116e-01,  -1.03346086e+00],
       [ -4.11485549e+12,  -4.15872765e+12],
       [ -3.97690558e+00,  -6.56769896e+00],
       [ -8.34569149e-03,  -1.17674023e-02],
       [ -2.66584531e+13,  -2.69423382e+13],
       [ -2.10294223e+00,  -2.69863915e+00],
       [ -1.29736237e+11,  -1.31118014e+11],
       [ -2.26515858e-03,  -7.97832850e-03],
       [ -2.58903159e-03,  -1.29228756e-02],
       [ -1.61415696e+00,  -2.00974083e+00],
       [ -7.97582499e-04,  -3.23565165e-03],
       [ -9.69606861e-02,  -5.83252907e-01],
       [ -3.16640830e+00,  -4.80933285e+00],
       [ -1.10790539e+00,  -1.99139881e+00]], dtype=float32)

What am I missing to train this network correctly?

0 Answers:

No answers yet.