Can this unpooling operation be rewritten so that Keras can compute its gradient?

Asked: 2018-08-20 11:10:52

Tags: machine-learning neural-network keras

I am trying to create an unpooling layer using Keras with the TensorFlow backend. The unpooling operation I want to implement is described in this paper; it is the same unpooling operation used by SegNet.


Unpooling: In the convnet, the max pooling operation is non-invertible, however we can obtain an approximate inverse by recording the locations of the maxima within each pooling region in a set of switch variables. In the deconvnet, the unpooling operation uses these switches to place the reconstructions from the layer above into appropriate locations, preserving the structure of the stimulus.
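To make the switch idea above concrete, here is a minimal NumPy sketch (not the Keras layer itself, just an illustration): max pooling records where each regional maximum was, and unpooling places the pooled values back at those recorded locations, leaving zeros elsewhere. The function names are my own.

```python
import numpy as np

def max_pool_with_switches(x, k=2):
    """k-by-k max pooling over a 2D array that also records, for each
    pooling region, the location of the maximum ("switch" variables)."""
    h, w = x.shape
    pooled = np.zeros((h // k, w // k))
    switches = np.zeros_like(x, dtype=bool)
    for i in range(0, h, k):
        for j in range(0, w, k):
            region = x[i:i + k, j:j + k]
            r, c = np.unravel_index(np.argmax(region), region.shape)
            pooled[i // k, j // k] = region[r, c]
            switches[i + r, j + c] = True
    return pooled, switches

def unpool(pooled, switches, k=2):
    """Place each pooled value back at its recorded switch location;
    every other position stays zero."""
    out = np.zeros(switches.shape)
    # Upsample by repetition, then keep only the switch positions
    out[switches] = np.repeat(np.repeat(pooled, k, axis=0), k, axis=1)[switches]
    return out

x = np.array([[1., 3., 2., 1.],
              [4., 2., 1., 5.],
              [0., 1., 6., 2.],
              [2., 3., 1., 0.]])
pooled, switches = max_pool_with_switches(x)
restored = unpool(pooled, switches)
```

Here `restored` contains 4, 5, 3 and 6 at exactly the positions they occupied in `x`, which is the structure-preserving approximate inverse the quote describes.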

Most of my code is adapted from this implementation, which was written for an older version of Keras.

So far I have written a custom layer that performs the unpooling operation correctly in the forward pass. The problem is that when Keras tries to compute gradients during backpropagation, I get this error:

raise ValueError('An operation has `None` for gradient. '
ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.

As I understand it, this error occurs because I am using operations for which Keras does not have a gradient defined.
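One common workaround for this kind of error (an assumption on my part, not a confirmed fix for the layer above) is to avoid selecting through the boolean mask and instead multiply the input by the mask cast to float: the comparison producing the mask still has no gradient, but the output then becomes a plain product, whose gradient with respect to the input is just the mask. In Keras terms that would be something like `K.cast(mask, 'float32') * pool_input` instead of `K.tf.where(mask, pool_input, zeros)`. A NumPy sketch showing the two forms compute the same values:

```python
import numpy as np

# Hypothetical stand-ins for pool_input and the boolean mask
pool_input = np.array([[1., 3.], [4., 2.]])
mask = np.array([[False, True], [True, False]])

# The original formulation: keep masked values, zeros elsewhere
via_where = np.where(mask, pool_input, 0.)

# Equivalent multiplicative form: the mask becomes a constant float
# factor, so the result is a product of the input and a constant --
# an operation whose derivative w.r.t. the input is well defined
via_multiply = pool_input * mask.astype(np.float64)
```

Both arrays are identical, so swapping the selection for the product changes nothing in the forward pass.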

Here is my code:

from keras import backend as K
from keras.engine.topology import Layer
from keras.layers import UpSampling2D
import numpy as np
import operator

class MyUnpooler(Layer):

    # Initialisations
    def __init__(self, pool_layer, pool_input, size=(2,2), **kwargs):
        self.pool_layer = pool_layer
        self.pool_input = pool_input
        self.size = size
        super(MyUnpooler, self).__init__(**kwargs)

    # This method would be used to create weights
    # I don't have any trainable weights but this must be implemented
    def build(self, input_shape):
        super(MyUnpooler, self).build(input_shape)

    # This method is for the layers' logic
    # x is always input to the layer
    def call(self, x):
        # Zeros for later
        zeros = K.zeros_like(self.pool_input)

        # Mask template
        upsampled = UpSampling2D(size=self.size)(self.pool_layer)
        upsampled_shape = upsampled.get_shape().as_list()[1:]
        input_shape = self.pool_input.get_shape().as_list()[1:]

        # list() so the result can be indexed under Python 3
        size_diff = list(map(operator.sub, input_shape, upsampled_shape))
        unfiltered_mask = K.spatial_2d_padding(upsampled, padding=((0,size_diff[1]),(0,size_diff[2])))

        # Create the mask
        self.mask = K.tf.equal(self.pool_input, unfiltered_mask)
        assert self.mask.get_shape().as_list() == self.pool_input.get_shape().as_list()

        self.unpooled = K.tf.where(self.mask, self.pool_input, zeros)
        return K.tf.to_float(self.unpooled)

    def compute_output_shape(self, input_shape):
        # input_shape is not actually the input shape we need...
        # We need to be able to UpSample the layer to calculate the dimensions
        upsampled = UpSampling2D(size=self.size)(self.pool_layer)
        upsampled_shape = upsampled.get_shape().as_list()[1:]
        inp_shape = self.pool_input.get_shape().as_list()[1:]

        # list() so the result can be indexed under Python 3
        size_diff = list(map(operator.sub, inp_shape, upsampled_shape))
        unf_mask = K.spatial_2d_padding(upsampled, padding=((0,size_diff[1]),(0,size_diff[2])))

        return tuple(unf_mask.get_shape().as_list())

If there is a better way to do the unpooling operation, I am also happy to discard my attempt so far entirely.
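For what it's worth, SegNet-style implementations often avoid reconstructing the mask by comparison altogether: TensorFlow's `tf.nn.max_pool_with_argmax` returns the flattened indices of the maxima alongside the pooled values, and unpooling is then a scatter of the pooled values into a zero tensor at those indices. A NumPy sketch of just the scatter step (the function name and toy indices are mine):

```python
import numpy as np

def unpool_from_argmax(pooled, argmax_flat, output_shape):
    """Scatter pooled values into a zero tensor at the flat indices
    recorded during pooling (the index form that
    tf.nn.max_pool_with_argmax returns)."""
    out = np.zeros(int(np.prod(output_shape)))
    out[argmax_flat.ravel()] = pooled.ravel()
    return out.reshape(output_shape)

# Toy example: a 2x2 pooled map restored into a 4x4 map
pooled = np.array([[4., 5.], [3., 6.]])
argmax_flat = np.array([[4, 7], [13, 10]])  # flat positions of the maxima
restored = unpool_from_argmax(pooled, argmax_flat, (4, 4))
```

Because the indices are saved at pooling time, no non-differentiable comparison is needed when unpooling.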

0 Answers