Trying to understand custom loss layers in caffe

Asked: 2017-06-21 11:12:07

Tags: python neural-network deep-learning caffe pycaffe

I have seen that it is possible to define a custom loss layer, for example EuclideanLoss, in caffe like this:

    import caffe
    import numpy as np


    class EuclideanLossLayer(caffe.Layer):
        """
        Compute the Euclidean Loss in the same manner as the C++ EuclideanLossLayer
        to demonstrate the class interface for developing layers in Python.
        """

        def setup(self, bottom, top):
            # check input pair
            if len(bottom) != 2:
                raise Exception("Need two inputs to compute distance.")

        def reshape(self, bottom, top):
            # check input dimensions match
            if bottom[0].count != bottom[1].count:
                raise Exception("Inputs must have the same dimension.")
            # difference is shape of inputs
            self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
            # loss output is scalar
            top[0].reshape(1)

        def forward(self, bottom, top):
            self.diff[...] = bottom[0].data - bottom[1].data
            top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

        def backward(self, top, propagate_down, bottom):
            for i in range(2):
                if not propagate_down[i]:
                    continue
                if i == 0:
                    sign = 1
                else:
                    sign = -1
                bottom[i].diff[...] = sign * self.diff / bottom[i].num
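
To actually use such a layer, Caffe must be built with WITH_PYTHON_LAYER=1, and the layer is referenced by module and class name from the net definition. As a minimal sketch (assuming the class above is saved as pyloss.py somewhere on your PYTHONPATH; the file name and input shapes are my own placeholders), the net can be assembled with pycaffe's NetSpec:

    import caffe
    from caffe import layers as L

    ns = caffe.NetSpec()
    ns.x1 = L.Input(shape=dict(dim=[4, 3, 5, 5]))
    ns.x2 = L.Input(shape=dict(dim=[4, 3, 5, 5]))
    # type: "Python" layers are located via module (the .py file) and
    # layer (the class name); loss_weight marks this top as a loss
    ns.loss = L.Python(ns.x1, ns.x2,
                       module='pyloss', layer='EuclideanLossLayer',
                       loss_weight=1)
    print(ns.to_proto())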

However, there are a few things in that code I don't understand:

If I want to customise this layer and change the loss calculation in this line:

top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

to, let's say:

channelAxis = 1  # index of the channel axis in an NxCxHxW blob
self.diff[...] = np.sum(bottom[0].data, axis=channelAxis) - np.sum(bottom[1].data, axis=channelAxis)
top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

How would I have to change the backward function? For EuclideanLoss it is:

bottom[i].diff[...] = sign * self.diff / bottom[i].num

What would it look like for the loss I described?

And what is the sign for?
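
For orientation, here is one way the matching backward could look. This is only a sketch under the assumption that forward stores the channel-summed difference with keepdims=True (so self.diff has shape Nx1xHxW and numpy broadcasting copies the gradient to every channel of the full-sized bottom diff); it is not taken from the original post:

        def backward(self, top, propagate_down, bottom):
            # loss = sum(diff**2) / (2N) with diff = sum_c(x1) - sum_c(x2),
            # so d(loss)/d(x1) = +diff/N and d(loss)/d(x2) = -diff/N;
            # the sign encodes which side of the subtraction each bottom is on
            for i in range(2):
                if not propagate_down[i]:
                    continue
                sign = 1 if i == 0 else -1
                # self.diff is Nx1xHxW, bottom[i].diff is NxCxHxW:
                # broadcasting spreads the same gradient over all channels
                bottom[i].diff[...] = sign * self.diff / bottom[i].num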

1 Answer:

Answer 0 (score: 2):

While implementing the loss you are after as a "Python" layer can be a very educational exercise, you can get the same loss using existing layers. All you need is to add a "Reduction" layer for each blob before calling the regular "EuclideanLoss" layer:

layer {
  type: "Reduction"
  name: "rx1"
  bottom: "x1"
  top: "rx1"
  reduction_param { axis: 1 operation: SUM }
} 
layer {
  type: "Reduction"
  name: "rx2"
  bottom: "x2"
  top: "rx2"
  reduction_param { axis: 1 operation: SUM }
} 
layer {
  type: "EuclideanLoss"
  name: "loss"
  bottom: "rx1"
  bottom: "rx2"
  top: "loss"
}
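
One caveat worth knowing: Caffe's "Reduction" layer with axis: 1 collapses all axes from the given axis onward, so an NxCxHxW blob becomes a length-N vector, not an NxHxW blob. As a rough numpy sketch of what the configuration above computes (my own illustration, with made-up shapes):

    import numpy as np

    def reduction_sum(x, axis=1):
        # like Caffe's Reduction/SUM: collapse all axes from `axis` onward
        return x.reshape(x.shape[:axis] + (-1,)).sum(axis=-1)

    def euclidean_loss(a, b):
        # like Caffe's EuclideanLoss: sum of squared differences / (2N)
        return np.sum((a - b) ** 2) / a.shape[0] / 2.

    x1 = np.random.randn(4, 3, 5, 5).astype(np.float32)
    x2 = np.random.randn(4, 3, 5, 5).astype(np.float32)
    loss = euclidean_loss(reduction_sum(x1), reduction_sum(x2))  # scalar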

UPDATE:
Based on your comment, if you only want to sum over the channel dimension and keep all other dimensions unchanged, you can use a fixed 1x1 convolution (as you suggested):

layer {
  type: "Convolution"
  name: "rx1"
  bottom: "x1"
  top: "rx1"
  param { lr_mult: 0 decay_mult: 0 } # make this layer *fixed*
  convolution_param {
    num_output: 1
    kernel_size: 1
    bias_term: false  # no need for bias
    weight_filler { type: "constant" value: 1 } # sum
  }
}
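
To see why this works: a 1x1 convolution with a single output channel and all weights fixed to 1 (and no bias) computes, at every spatial location, the sum over the input channels, keeping H and W intact. A small numpy check (the shapes are my own example):

    import numpy as np

    x = np.random.randn(4, 3, 5, 5).astype(np.float32)  # NxCxHxW
    w = np.ones((1, 3), dtype=np.float32)  # (num_output, channels), all 1s

    # the fixed 1x1 convolution: out[n,0,h,w] = sum_c w[0,c] * x[n,c,h,w]
    conv1x1 = np.einsum('oc,nchw->nohw', w, x)

    assert np.allclose(conv1x1, x.sum(axis=1, keepdims=True))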