Custom layer with Keras: is it possible to force output neurons of a softmax layer to 0 based on data in the input layer?

Asked: 2018-12-22 20:10:29

Tags: backend keras-layer softmax

I have a neural network whose last layer has 13 output neurons with softmax activation (soft_out). I also know in advance, based on the input values, that certain neurons in the output layer should have the value 0. For this I have a special input layer (inp) of 13 neurons, each of which is either 0 or 1.

Is there a way to force, say, output neuron no. 3 to have the value 0 whenever input neuron no. 3 is set to 1?

Apart from that, it still has to act like a softmax layer, so the remaining neurons must sum to 1. The output rows therefore have to be renormalized.

The steps would be (a compact vectorized sketch follows this list):

1. Clear the soft_out neurons where the inp neuron == 1
2. Calculate the row sums of soft_out
3. Check which rows have a sum of 0
4. Set soft_out in the rows whose sum is 0 to an arbitrary constant value
5. Calculate the row sums of soft_out again
6. Check which rows still have a sum of 0 and set those sums to 1
7. Return soft_out / sum for each row (so that every output row sums to 1)
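As a compact reference, here is a vectorized numpy sketch of those seven steps in a single function (the function name mask_and_renormalize and the keepdims-based broadcasting are my own choices, not part of the question; it assumes inp and soft_out are 2-D arrays of the same shape):

import numpy as np

def mask_and_renormalize(soft_out, inp):
    # step 1: zero the positions where inp == 1
    y = np.where(inp == 1, 0.0, soft_out)
    # steps 2-4: rows whose sum became 0 get an arbitrary constant (1),
    # after which the masked positions are cleared again
    row_sum = y.sum(axis=-1, keepdims=True)
    y = np.where(row_sum == 0, 1.0, y)
    y = np.where(inp == 1, 0.0, y)
    # steps 5-6: recompute the row sums and guard against division by zero
    row_sum = y.sum(axis=-1, keepdims=True)
    row_sum[row_sum == 0] = 1.0
    # step 7: renormalize so every non-empty row sums to 1
    return y / row_sum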

The same steps using numpy, step by step. First the input data:

import numpy as np

inp = np.random.choice([0, 1], size=5*13, p=[.5, .5])
inp = inp.reshape(5, 13)
soft_out = np.around(np.random.random_sample((5, 13)), 2)
# force two edge cases: row 3 is fully masked, row 4 is masked everywhere
# except column 12, where soft_out happens to be 0 as well
inp[3, :] = 1
inp[4, :] = 1
inp[4, 12] = 0
soft_out[4, 12] = 0
print("inp", inp, "\n")
print("soft_out", soft_out, "\n")

inp [[1 1 0 0 1 1 1 1 1 1 0 0 0]
 [0 0 0 0 1 0 0 0 1 1 0 0 1]
 [1 0 1 1 0 0 0 1 0 0 0 0 1]
 [1 1 1 1 1 1 1 1 1 1 1 1 1]
 [1 1 1 1 1 1 1 1 1 1 1 1 0]]

soft_out [[0.8  0.16 0.42 0.44 0.67 0.39 0.38 0.54 0.75 0.06 0.62 0.67 0.86]
 [0.87 0.28 0.51 0.92 0.89 0.97 0.1  0.17 0.73 0.43 0.84 0.96 0.57]
 [0.16 0.33 0.62 0.37 0.42 0.54 0.1  0.54 0.92 0.51 0.89 0.86 0.96]
 [0.53 0.59 0.6  0.63 0.57 0.95 0.41 0.1  0.32 0.81 0.87 0.35 0.16]
 [0.13 0.57 0.92 0.87 0.82 0.08 0.74 0.78 0.2  0.22 0.64 0.06 0.  ]]

# 0. find out where inp is set to 1 and where to 0
mask_nonzero = np.where(inp != 0)
print("mask_nonzero", mask_nonzero, "\n")

mask_zero = np.where(inp == 0)
print("mask_zero", mask_zero, "\n")

# 1. clear those values where inp is 1
soft_out[mask_nonzero] = 0
print("soft_out", soft_out, "\n")

# 2. calculate the sum of the rows
row_sum_soft_out = np.sum(soft_out, axis=-1)
print("row_sum_soft_out", row_sum_soft_out, "\n")

# 3. reshape in order to find the rows where the sum is zero,
#    i.e. the rows whose soft_out values have to be corrected
row_sum_soft_out = row_sum_soft_out.reshape(5, 1)
print("row_sum_soft_out", row_sum_soft_out, "\n")

# 4. correct soft_out in the rows where the sum is 0 to an arbitrary
#    constant value (here 1), then re-clear the positions where inp is 1
mask_sum_zero = np.where(row_sum_soft_out == 0)
soft_out[mask_sum_zero[0]] = 1
print("soft_out", soft_out, "\n")
print("mask_sum_zero", mask_sum_zero, "\n")
soft_out[mask_nonzero] = 0

# 5. calculate the sum of the rows in soft_out again
row_sum_soft_out = np.sum(soft_out, axis=-1)

# 6. check in which rows the sum is still 0 and set those sums to 1
#    to avoid dividing by zero
mask_sum_zero = np.where(row_sum_soft_out == 0)
row_sum_soft_out[mask_sum_zero] = 1

row_sum_soft_out = row_sum_soft_out.reshape(5, 1)
# 7. return soft_out / sum per row (so that every output row sums to 1)
y = soft_out / row_sum_soft_out
print("soft_out", y)
print(np.sum(y, axis=-1), "\n")

mask_nonzero (array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3,
       3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
      dtype=int64), array([ 0,  1,  4,  5,  6,  7,  8,  9,  4,  8,  9, 12,  0,  2,  3,  7, 12,
        0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12,  0,  1,  2,  3,
        4,  5,  6,  7,  8,  9, 10, 11], dtype=int64))

mask_zero (array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2,
       4], dtype=int64), array([ 2,  3, 10, 11, 12,  0,  1,  2,  3,  5,  6,  7, 10, 11,  1,  4,  5,
        6,  8,  9, 10, 11, 12], dtype=int64))

soft_out [[0.   0.   0.42 0.44 0.   0.   0.   0.   0.   0.   0.62 0.67 0.86]
 [0.87 0.28 0.51 0.92 0.   0.97 0.1  0.17 0.   0.   0.84 0.96 0.  ]
 [0.   0.33 0.   0.   0.42 0.54 0.1  0.   0.92 0.51 0.89 0.86 0.  ]
 [0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.  ]
 [0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.  ]]

row_sum_soft_out [3.01 5.62 4.57 0.   0.  ]

row_sum_soft_out [[3.01]
 [5.62]
 [4.57]
 [0.  ]
 [0.  ]]

soft_out [[0.   0.   0.42 0.44 0.   0.   0.   0.   0.   0.   0.62 0.67 0.86]
 [0.87 0.28 0.51 0.92 0.   0.97 0.1  0.17 0.   0.   0.84 0.96 0.  ]
 [0.   0.33 0.   0.   0.42 0.54 0.1  0.   0.92 0.51 0.89 0.86 0.  ]
 [1.   1.   1.   1.   1.   1.   1.   1.   1.   1.   1.   1.   1.  ]
 [1.   1.   1.   1.   1.   1.   1.   1.   1.   1.   1.   1.   1.  ]]

mask_sum_zero (array([3, 4], dtype=int64), array([0, 0], dtype=int64))

soft_out [[0.         0.         0.13953488 0.1461794  0.         0.
  0.         0.         0.         0.         0.20598007 0.22259136
  0.28571429]
 [0.15480427 0.04982206 0.09074733 0.16370107 0.         0.17259786
  0.01779359 0.03024911 0.         0.         0.14946619 0.17081851
  0.        ]
 [0.         0.07221007 0.         0.         0.09190372 0.11816193
  0.02188184 0.         0.20131291 0.11159737 0.19474836 0.18818381
  0.        ]
 [0.         0.         0.         0.         0.         0.
  0.         0.         0.         0.         0.         0.
  0.        ]
 [0.         0.         0.         0.         0.         0.
  0.         0.         0.         0.         0.         0.
  1.        ]]
[1. 1. 1. 0. 1.]

Can someone help me write this as a Keras backend layer?

1 answer:

Answer 0 (score: 0)

I finally came up with this code. The shape (shape=(32, 13)) should be adapted as needed:

from keras import backend as K
from keras.layers import Lambda
import tensorflow as tf

def mask_output2(x):
    # soft_out is the output of the previous softmax layer, shape=(batch_size, 13) in my case
    # inp is the tensor containing 0s and 1s, where 1 means that the action
    # was already used, shape=(batch_size, 13)
    inp, soft_out = x
    # add a very small value in order to avoid having 0 everywhere
    c = K.constant(0.0000001, dtype='float32', shape=(32, 13))
    y = soft_out + c
    # clear the invalid actions' values: keep y where inp == 0, set 0 where inp == 1
    y = Lambda(lambda x: K.switch(K.equal(x[0], 0), x[1], K.zeros_like(x[1])))([inp, y])
    y_sum = K.sum(y, axis=-1)
    # correct the sum if it is 0 to avoid dividing by zero
    y_sum_corrected = Lambda(lambda x: K.switch(K.equal(x[0], 0), K.ones_like(x[0]), x[0]))([y_sum])
    # renormalize so that each row sums to 1: first compute 1/sum
    y_sum_corrected = tf.divide(1, y_sum_corrected)
    # multiply the (32, 13) tensor row-wise with the (32,) tensor of reciprocals
    y = tf.einsum('ij,i->ij', y, y_sum_corrected)
    return y
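For completeness, here is a minimal sketch of how such a function could be wired into a model (the layer names, the feature dimension of 20, and the Input/Dense layers are illustrative assumptions, not from the answer; mask_output2 is wrapped in a Lambda layer that receives both the mask input and the softmax output):

from keras.layers import Input, Dense, Lambda
from keras.models import Model

# hypothetical wiring, assuming 20 input features; note that the batch size
# must match the hard-coded shape=(32, 13) of the constant in mask_output2
features = Input(shape=(20,))
mask_inp = Input(shape=(13,))    # the 0/1 mask described in the question
soft_out = Dense(13, activation='softmax')(features)
masked_out = Lambda(mask_output2)([mask_inp, soft_out])
model = Model(inputs=[features, mask_inp], outputs=masked_out)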