Using Y_True as an intermediate layer

Asked: 2017-06-29 16:42:28

Tags: neural-network keras lstm keras-layer loss-function

I am trying to implement a structure similar to the block diagram below. I am able to implement it from scratch, but when I try to implement it in Keras I run into some difficulties. Any help would be appreciated. Specifically, I have two questions about how to implement it in Keras.

1) How can I feed the actual output in as a separate input layer, as shown in the block diagram below? As each input is fed to the network, I want to feed the corresponding gold-standard output into the Y_true part shown in the diagram.
2) If I want to backpropagate the cost function from the cost part, is it possible to go backwards along the vertical path rather than along the path that contains a copy of the third layer?

[Image: overall block diagram of the proposed Keras model]

2 Answers:

Answer 0 (score: 1)

Please try this. The main idea is to create a model with two outputs: one for y_pred and another for the loss. When compiling that model, use a list of loss functions, and we only care about the second loss.

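A minimal sketch of that idea (the layer sizes and the names x_in, y_true_in, x_data and y_data below are assumptions, not part of the original answer):

import numpy as np
import keras.backend as K
from keras.layers import Input, Dense, Lambda
from keras.models import Model

# hypothetical shapes: 10 input features, 3 output classes
x_in = Input(shape=(10,), name='x')
y_true_in = Input(shape=(3,), name='y_true')  # the gold-standard output, fed as an extra input

h = Dense(32, activation='relu')(x_in)
y_pred = Dense(3, activation='softmax', name='y_pred')(h)

# compute the loss inside the graph, where y_true is just another tensor
# ((target, output) argument order as in the Keras 2 backend)
loss_out = Lambda(lambda t: K.expand_dims(K.categorical_crossentropy(t[0], t[1])),
                  name='loss')([y_true_in, y_pred])

model = Model(inputs=[x_in, y_true_in], outputs=[y_pred, loss_out])

# the first output exists only so predictions can be read later (zero-weighted loss);
# the second output already *is* the loss, so its "loss function" just passes it through
model.compile(optimizer='adam',
              loss=[lambda yt, yp: K.zeros_like(yp), lambda yt, yp: yp],
              loss_weights=[0., 1.])

# training: y_true goes in together with the inputs; the targets are dummies
# model.fit([x_data, y_data], [y_data, np.zeros((len(x_data), 1))], epochs=10)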

Answer 1 (score: 1)

I experimented with a custom loss function. It is possible, but it is a bit more complicated than usual (and I have no idea whether training will succeed...):

import keras.backend as K

def customLoss(yTrue, yPred):

    # starting with tensors shaped like (batch, 5, 3)

    # find the predicted class to compare - this example works with categorical
    # classification (only one true class per element in a sequence)
    trueMax = K.argmax(yTrue, axis=-1)
    predMax = K.argmax(yPred, axis=-1)
    # at this point, shapes become (batch, 5)

    # find the elements where prediction and truth differ
    # (cast to float so the boolean comparison can be summed)
    neq = K.cast(K.not_equal(trueMax, predMax), K.floatx())

    # sum the differences; sequences whose sum is 0 were predicted entirely correctly
    neqsum = K.sum(neq, axis=-1)
    # shape is now only (batch,)

    # to avoid values greater than 1, compare again:
    trueFalse = K.cast(K.equal(neqsum, 0), K.floatx())

    # adjust from values between 0 and 1 to values between -1 and 1:
    adj = (2 * trueFalse) - 1

    # now it's time to create Loss1 and Loss2 (which I don't know).
    # they differ from regular losses because they must keep the batch dimension,
    # so the result can be multiplied with "adj":

    l1 = someLoss  # a loss of your choice, reduced so it keeps the batch dimension
    l2 = someLoss  # a loss of your choice, reduced so it keeps the batch dimension
    # these two must also be shaped like (batch,)

    # then apply your formula:
    res = ((1 - adj) * l1) + ((adj - 1) * l2)
    # this step could perhaps be replaced by the K.switch function;
    # it would probably be much more efficient, but I'd have to learn how to use it first

    # finally, sum over the batch dimension, or use a mean value or anything similar
    return K.sum(res)  # or K.mean(res)
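Once l1 and l2 are filled in with real losses, a custom loss like this would be plugged in the usual way. A minimal usage sketch (model, x_train and y_train below are assumed, not taken from the question):

# sketch only: "model" is assumed to be an already-built Keras model whose
# output matches the (batch, 5, 3) tensors discussed above
model.compile(optimizer='adam', loss=customLoss)
model.fit(x_train, y_train, epochs=10, batch_size=32)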

Testing it (the shapes are slightly different, but the number of dimensions is the same):

import numpy as np

def tprint(t):
    print(K.eval(K.shape(t)))
    print(K.eval(t))
    print("\n")

x = np.array([[[.2,.7,.1],[.6,.3,.1],[.3,.3,.4],[.6,.3,.1],[.3,.6,.1]],[[.5,.2,.3],[.3,.6,.1],[.2,.7,.1],[.7,.15,.15],[.5,.2,.3]]])
y = np.array([[[0.,1.,0.],[1.,0.,0.],[0.,0.,1.],[1.,0.,0.],[0.,1.,0.]],[[0.,1.,0.],[0.,0.,1.],[0.,1.,0.],[1.,0.,0.],[1.,0.,0.]]])


x = K.variable(x)
y = K.variable(y)

xM = K.argmax(x,axis=-1)
yM = K.argmax(y,axis=-1)

# cast the boolean comparison to float so it can be summed
neq = K.cast(K.not_equal(xM, yM), K.floatx())

neqsum = K.sum(neq, axis=-1, keepdims=False)
trueFalse = K.cast(K.equal(neqsum, 0), K.floatx())
adj = (2 * trueFalse) - 1

l1 = 3 * K.sum(K.sum(y,axis=-1),axis=-1)
l2 = 7 * K.sum(K.sum(y,axis=-1),axis=-1)

res = ((1-adj)*l1) +((adj-1)*l2)
sumres = K.sum(res) #or K.mean, or something similar
tprint(xM)
tprint(yM)
tprint(neq)
tprint(neqsum)
tprint(trueFalse)
tprint(adj)
tprint(l1)
tprint(l2)
tprint(res)