Custom metric in a multi-output Keras model

Time: 2019-12-19 18:40:49

Tags: tensorflow keras deep-learning conv-neural-network vgg-net

I am working on a custom multi-digit number recognition problem. The numbers have only one or two digits. I am using a VGG16 model with two separate heads, one per digit, to avoid having 100 classes.

The model looks like this:

from keras.applications.vgg16 import VGG16
from keras.layers import Flatten, Dense, Dropout
from keras.models import Model
from keras.optimizers import Adam

input_shape = (256,96,3)
base_model = VGG16(weights='imagenet', include_top=False, input_shape = input_shape)
xo = base_model.output
x = base_model.input
flat = Flatten(name = 'flat')(xo)
h1 = Dense(1024, activation='relu', name = 'first_hidden_layer')(flat)
d1 = Dropout(0.5, name = 'first_hidden_dropout')(h1)
h2 = Dense(1024, activation='relu', name = 'second_hidden_layer')(d1)
d2 = Dropout(0.5, name = 'second_hidden_dropout')(h2)

o_digit1 = Dense(11, activation='softmax', name = 'digit1_classification')(d2)
o_digit2 = Dense(11, activation='softmax', name = 'digit2_classification')(d2)

model = Model(inputs = x, outputs = [o_digit1, o_digit2] )
opt = Adam(lr=0.0001)
model.compile(optimizer=opt,
              loss='categorical_crossentropy',
              metrics={'digit1_classification': 'accuracy',
                       'digit2_classification': 'accuracy'},
              loss_weights={'digit1_classification': 0.5,
                            'digit2_classification': 0.5})

I want to build a custom metric to pass to model.compile that computes the accuracy of the actual full number. Normally, when you build your own metric function, it gets passed y_true and y_pred. For example:

def my_metric1(y_true, y_pred):
    return calculations(y_true, y_pred)

I can use my_metric1 to compute what I want for each output separately, but what I actually want is the accuracy of the full number. Something like this:

def my_metric2(y_pred1, y_true1, y_pred2, y_true2):
    return calculations2(y_pred1, y_true1, y_pred2, y_true2)

Here y_pred1, y_true1, y_pred2, y_true2 are the predictions and ground-truth values for each digit.
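For concreteness, this is roughly the computation I have in mind (a rough numpy sketch, assuming one-hot encoded labels; calculations2 is just a placeholder name):

import numpy as np

# A sample counts as correct only if BOTH digit heads predict the right class
def calculations2(y_pred1, y_true1, y_pred2, y_true2):
    digit1_ok = np.argmax(y_pred1, axis=-1) == np.argmax(y_true1, axis=-1)
    digit2_ok = np.argmax(y_pred2, axis=-1) == np.argmax(y_true2, axis=-1)
    return np.mean(digit1_ok & digit2_ok)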

How can I achieve this?

1 Answer:

Answer 0 (score: 0)

If you are treating the two digits as separate outputs, you can create a custom loss function along these lines:

import numpy as np

# Define a custom loss:
# combine the MSE of both digit outputs and down-weight the total
# when one of the two digits is already predicted correctly
def custom_loss(y_pred1, y_true1, y_pred2, y_true2):

    def loss1(y_true1, y_pred1):
        return np.square(np.subtract(y_true1, y_pred1)).mean()

    def loss2(y_true2, y_pred2):
        return np.square(np.subtract(y_true2, y_pred2)).mean()

    def finalloss(y_pred1, y_true1, y_pred2, y_true2):
        Loss = loss1(y_true1, y_pred1) + loss2(y_true2, y_pred2)
        if np.array_equal(y_pred1, y_true1) and np.array_equal(y_pred2, y_true2):
            return 0
        elif np.array_equal(y_pred1, y_true1) and not np.array_equal(y_pred2, y_true2):
            return 0.5 * Loss
        elif not np.array_equal(y_pred1, y_true1) and np.array_equal(y_pred2, y_true2):
            return 0.5 * Loss
        else:
            return Loss

    return finalloss(y_pred1, y_true1, y_pred2, y_true2)

# Compile the model
model.compile(optimizer='adam',
              loss=custom_loss,  # pass the custom loss function
              metrics=['accuracy'])

# train
model.fit(data, labels)  
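Keep in mind that Keras only passes (y_true, y_pred) for one output at a time to anything given to compile, so logic that needs both heads at once, like the full-number accuracy asked about, can instead be tracked with a custom callback that evaluates both outputs together on held-out data (a minimal sketch; x_val, y_val1 and y_val2 are assumed validation arrays, not names from the question):

import numpy as np
from keras.callbacks import Callback

class FullNumberAccuracy(Callback):
    """Logs the fraction of validation samples where BOTH digit heads are correct."""
    def __init__(self, x_val, y_val1, y_val2):
        super(FullNumberAccuracy, self).__init__()
        self.x_val = x_val
        self.y_val1 = y_val1
        self.y_val2 = y_val2

    def on_epoch_end(self, epoch, logs=None):
        pred1, pred2 = self.model.predict(self.x_val)
        ok1 = np.argmax(pred1, axis=-1) == np.argmax(self.y_val1, axis=-1)
        ok2 = np.argmax(pred2, axis=-1) == np.argmax(self.y_val2, axis=-1)
        acc = float(np.mean(ok1 & ok2))
        print(' - val_full_number_acc: %.4f' % acc)

# usage: model.fit(..., callbacks=[FullNumberAccuracy(x_val, y_val1, y_val2)])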

Even better would be to use a single output for both digits in the model, something like a fully connected [1x1x2] layer. You can find a multi-digit detector article here.
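As a rough sketch of what such a single head could look like (the layer sizes here are my assumption, not taken from the linked article): one Dense layer reshaped into one softmax row per digit position, reusing d2 and x from the question's model:

from keras.layers import Dense, Reshape, Activation
from keras.models import Model

# one head: 2 digit positions x 11 classes (digits 0-9 plus a "blank" class)
logits = Dense(2 * 11, name='digits_logits')(d2)
digits = Reshape((2, 11), name='digits_reshape')(logits)
digits = Activation('softmax', name='digits_softmax')(digits)  # softmax over the last axis

single_head_model = Model(inputs=x, outputs=digits)
single_head_model.compile(optimizer='adam',
                          loss='categorical_crossentropy',  # labels shaped (batch, 2, 11)
                          metrics=['accuracy'])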