Implementing a 2D max subarray function as a custom loss function in Keras

Asked: 2019-06-28 14:56:03

Tags: python arrays tensorflow keras cython

I am trying to implement a custom loss function in Keras (TensorFlow backend).

My goal is to create a loss function that takes y_pred of size (150, 200, 1) (i.e. a 150x200 image with 1 channel), takes its difference with the corresponding tensor y_true, and then scans the resulting "difference" array for the subarray, of any possible size, whose sum has the largest absolute value (the 2D maximum subarray problem). The function should then output the absolute value of that subarray's sum as the loss (a float). (I am trying to model the function on the "MESA" algorithm from this paper: https://www.robots.ox.ac.uk/~vgg/publications/2010/Lempitsky10b/lempitsky10b.pdf)
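To make the target quantity concrete, here is a tiny brute-force illustration (not part of my actual code, and far too slow for real images) of the 2D maximum-absolute-sum subarray on a small matrix:

import numpy as np

#brute force only: enumerate every axis-aligned subarray and keep the largest |sum|
diff = np.array([[ 1., -2.,  3.],
                 [-4.,  5., -6.]])

best = 0.
h, w = diff.shape
for top in range(h):
    for bottom in range(top, h):
        for left in range(w):
            for right in range(left, w):
                s = diff[top:bottom+1, left:right+1].sum()
                best = max(best, abs(s))

print(best) #6.0 for this matrix (the single element -6)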

I have been reading up on custom loss functions in Keras, and I understand that the loss has to be written using Keras backend functions. While I currently have a Cython-optimized version of my loss function, I don't quite see how to convert it into a Keras-friendly version. The main code underlying my loss function is shown below.

import numpy as np
from keras import backend as K
import CythonMESA  #the compiled Cython module shown below

#The loss function as defined in my code
def MESA(y_true, y_pred):
    diff = y_true - y_pred
    diff = K.eval(diff)  #attempt to convert the symbolic tensor to a numpy array
    result = CythonMESA.MaxSubArray2D(diff)
    result = np.array([result])
    result = K.variable(result)
    return result

model.compile(
    loss=MESA,
    optimizer='adam',
    metrics=['accuracy']
)

The "CythonMESA" module contains some Cython-optimized functions, which I have included below. Specifically, the "CythonMESA.MaxSubArray2D" function takes a 2D array as input (e.g. a 2D np.ndarray object) and outputs a double.

#Contents of CythonMESA.pyx

import numpy as np
cimport cython

@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)

#a helper function that is called within the main function below
#this function computes the maximum sum subarray in a 1D array using Kadane's algorithm
cdef double KadaneAbsoluteValue(double [:] array):
    cdef int length = int(array.shape[0])
    cdef double[:] maxSums = np.zeros(length, np.float64)
    cdef double kadaneMax
    cdef int i
    for i in range(length):
        if i == 0:
            maxSums[0] = array[0]
            kadaneMax = abs(maxSums[0])
        else:
            if abs(array[i]) >= abs(array[i] + maxSums[i-1]):
                maxSums[i] = array[i]
            else:
                maxSums[i] = array[i] + maxSums[i-1]
            if abs(maxSums[i]) > kadaneMax:
                kadaneMax = abs(maxSums[i])
    return kadaneMax

#The main basis for the loss function
#Loops through a 2D array and uses the function above to compute maximum subarray
cpdef double MaxSubArray2D(double [:,:] array):
    cdef double maxSum = 0.
    cdef double currentSum
    cdef int height = int(array.shape[0])
    cdef int width = int(array.shape[1])
    cdef int i, j
    cdef double [:] tempArray
    if height >= width:
        for i in range(width):
            for j in range(i,width):
                tempArray = np.sum(array[:,i:j+1], axis=1)
                currentSum = KadaneAbsoluteValue(tempArray)
                if currentSum > maxSum:
                    maxSum = currentSum
    else:
        for i in range(height):
            for j in range(i, height):
                tempArray = np.sum(array[i:j+1,:], axis=0)
                currentSum = KadaneAbsoluteValue(tempArray)
                if currentSum > maxSum:
                    maxSum = currentSum
    return maxSum
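For reference, one way to build the module and sanity-check it on a NumPy array during development (pyximport is just one of several options for compiling a .pyx file) would be:

import numpy as np
import pyximport
pyximport.install(setup_args={"include_dirs": np.get_include()})

import CythonMESA

diff = np.random.randn(150, 200)
print(CythonMESA.MaxSubArray2D(diff)) #a single double: the max |sum|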

I have actually tried compiling a network in Keras with the function above directly, but as expected it throws an error.

If anyone could point me in the right direction on how to convert this into a Keras-friendly function, I would greatly appreciate it!

1 Answer:

Answer 0 (score: 0)

A simple convolution with a single filter filled with ones, followed by max pooling, will do it:

from keras import backend as K

subArrayX = 3
subArrayY = 3
inputChannels = 1
outputChannels = 1
convFilter = K.ones((subArrayX, subArrayY, inputChannels, outputChannels))

def local_loss(true, pred):

    diff = K.abs(true-pred) #you might also try K.square instead of abs

    localSums = K.conv2d(diff, convFilter)
    localSums = K.batch_flatten(localSums) 
        #if using more than 1 channel, you might want a different thing here

    return K.max(localSums, axis=-1)


model.compile(loss = local_loss, ....)
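If you want to check this loss outside of training first, you can evaluate it on constant tensors (a minimal sketch, assuming 4D inputs of shape (batch, height, width, channels) and the backend imported as K):

import numpy as np

#dummy data just to verify shapes and values
yTrue = K.constant(np.zeros((2, 150, 200, 1), dtype=np.float32))
yPred = K.constant(np.random.rand(2, 150, 200, 1).astype(np.float32))

print(K.eval(local_loss(yTrue, yPred)).shape) #expected: (2,), one loss per sample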

For all possible shapes:

convWeights = []
for i in range(1, maxWidth+1):
    for j in range(1, maxHeight+1):
        convWeights.append(K.ones((i,j,1,1)))

def custom_loss(true,pred):

    diff = true - pred

    #sums for each array size
    sums = [K.conv2d(diff, w) for w in convWeights]

    # I didn't understand if you want the max abs sum or abs of max sum
    # add this line depending on the answer:
    sums = [K.abs(s) for s in sums] 

    #get the max sum for each array size
    sums = [K.batch_flatten(s) for s in sums]
    sums = [K.max(s, axis=-1) for s in sums]

    #global sums for all sizes
    sums = K.stack(sums, axis=-1)
    sums = K.max(sums, axis=-1)

    return K.abs(sums)
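maxWidth and maxHeight are not defined above; set them before building convWeights, and keep in mind that the loop creates maxWidth * maxHeight separate convolutions, so values far below the full 150 x 200 range are advisable. With that assumption the loss compiles as usual:

#illustrative caps on the searched subarray sizes, set before the convWeights loop
maxWidth = 20
maxHeight = 20

model.compile(loss = custom_loss, optimizer = 'adam')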

Trying something similar to Kadane's algorithm (separating the dimensions)

Let's do this in separate dimensions:

if height >= width:
    convFilters1 = [K.ones((1, i, 1, 1)) for i in range(1,width+1)]
    convFilters2 = [K.ones((i, 1, 1, 1)) for i in range(1,height+1)]
    concatDim1 = 2
    concatDim2 = 1
else:
    convFilters1 = [K.ones((i, 1, 1, 1)) for i in range(1,height+1)]
    convFilters2 = [K.ones((1, i, 1, 1)) for i in range(1,width+1)]
    concatDim1 = 1
    concatDim2 = 2


def custom_loss_2_step(true,pred):
    diff = true-pred #shape (samp, h, w, 1)

    sums = [K.conv2d(diff, f) for f in convFilters1] #(samp, h, var, 1) 
                                                     #(samp, var, w, 1)    
    sums = K.concatenate(sums, axis=concatDim1) #(samp, h, superW, 1)
                                                #(samp, superH, w, 1)
    sums = [K.conv2d(sums, f) for f in convFilters2] #(samp, var, superW, 1)
                                                     #(samp, superH, var, 1)
    sums = K.concatenate(sums, axis=concatDim2) #(samp, superH, superW, 1)
    sums = K.batch_flatten(sums) #(samp, allSums)

    #??? sums = K.abs(sums)
    maxSum = K.max(sums, axis=-1) #(samp,)
    #??? maxSum = K.abs(maxSum)

    return maxSum
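Here height and width are your image dimensions (150 and 200 in your case) and must exist before the if/else above builds the filter lists. Because the two concatenations collect every partial column sum and every partial row sum, the flattened tensor gets very large for 150x200 inputs, so it can be worth checking the mechanics on a small dummy size first (illustrative values only):

import numpy as np

#small illustrative size; swap in height = 150, width = 200 for the real model
height, width = 6, 8

if height >= width:
    convFilters1 = [K.ones((1, i, 1, 1)) for i in range(1, width+1)]
    convFilters2 = [K.ones((i, 1, 1, 1)) for i in range(1, height+1)]
    concatDim1, concatDim2 = 2, 1
else:
    convFilters1 = [K.ones((i, 1, 1, 1)) for i in range(1, height+1)]
    convFilters2 = [K.ones((1, i, 1, 1)) for i in range(1, width+1)]
    concatDim1, concatDim2 = 1, 2

yTrue = K.constant(np.zeros((1, height, width, 1), dtype=np.float32))
yPred = K.constant(np.random.rand(1, height, width, 1).astype(np.float32))
print(K.eval(custom_loss_2_step(yTrue, yPred))) #one value per sample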