Custom macro to call as a metric in Keras

Date: 2018-02-12 09:00:07

Tags: tensorflow machine-learning keras backpropagation precision-recall

I am trying to create a custom macro for recall = (recall of class 1 + recall of class 2) / 2. I came up with the following code, but I am not sure how to compute the true positives of class 0.

def unweightedRecall():
    def recall(y_true, y_pred):
        # recall of class 1
        true_positives1 = K.sum(K.round(K.clip(y_pred * y_true, 0, 1)))
        possible_positives1 = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall1 = true_positives1 / (possible_positives1 + K.epsilon())

        # --- get true positive of class 0 in true_positives0 here ---
        # Also, is there a cleaner way to get possible_positives0
        possible_positives0 = K.int_shape(y_true)[0] - possible_positives1
        recall0 = true_positives0 / (possible_positives0 + K.epsilon())
        return (recall0 + recall1)/2
    return recall

It seems I have to use Keras.backend.equal(x, y), but how do I create a tensor of shape K.int_shape(y_true)[0] with all values equal to, say, x?
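For what it's worth, one way to fill in the missing class-0 pieces (a sketch, assuming a TensorFlow-backed Keras): flipping labels and predictions with 1 - x avoids K.equal and constant-filled tensors entirely, though K.ones_like(y_true) would give a same-shaped all-ones tensor if one were needed.

```python
from tensorflow.keras import backend as K  # or: from keras import backend as K

def unweightedRecall():
    def recall(y_true, y_pred):
        # recall of class 1
        true_positives1 = K.sum(K.round(K.clip(y_pred * y_true, 0, 1)))
        possible_positives1 = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall1 = true_positives1 / (possible_positives1 + K.epsilon())

        # recall of class 0: treat label 0 as the positive class by flipping
        true_positives0 = K.sum(K.round(K.clip((1 - y_pred) * (1 - y_true), 0, 1)))
        possible_positives0 = K.sum(K.round(K.clip(1 - y_true, 0, 1)))
        recall0 = true_positives0 / (possible_positives0 + K.epsilon())
        return (recall0 + recall1) / 2
    return recall
```

The flip works because a class-0 sample is exactly a class-1 sample in the complemented labels.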

Edit 1

Following Marcin's comment, I want to create the custom metric as a callback in Keras. While browsing issues in Keras, I came across the following code for an f1 metric:

class Metrics(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):  # the second argument is the epoch index, not a batch
        predict = np.asarray(self.model.predict(self.validation_data[0]))
        targ = self.validation_data[1]
        self.f1s = f1(targ, predict)  # f1 stands in for some f1 implementation, e.g. sklearn's f1_score
        return
metrics = Metrics()
model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size, validation_data=[X_test,y_test], 
       verbose=1, callbacks=[metrics])

But how does the callback return the result? I want to implement unweighted recall = (recall class 1 + recall class 2) / 2. I can think of the following code, but would appreciate any help completing it:

from sklearn.metrics import recall_score
class Metrics(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        predict = np.asarray(self.model.predict(self.validation_data[0]))
        targ = self.validation_data[1]
        # --- what to store the result in?? ---
        self.XXXX=recall_score(targ, predict, average='macro')
        # we really dont need to return anything ??
        return
metrics = Metrics()
model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size, validation_data=[X_test,y_test], 
       verbose=1, callbacks=[metrics])
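One common pattern (a sketch, not the canonical answer: the attribute name val_recalls and the 0.5 threshold are my own choices, and self.validation_data is only populated by older Keras versions) is to initialize a list when training starts and append one value per epoch:

```python
import numpy as np
from sklearn.metrics import recall_score
from tensorflow import keras  # standalone `import keras` works the same here

class Metrics(keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        self.val_recalls = []  # one value per epoch; the attribute name is arbitrary

    def on_epoch_end(self, epoch, logs=None):
        # threshold the sigmoid outputs to hard 0/1 labels before calling sklearn
        predict = (np.asarray(self.model.predict(self.validation_data[0])) > 0.5).astype(int)
        targ = self.validation_data[1]
        # average='macro' is exactly (recall of class 0 + recall of class 1) / 2
        self.val_recalls.append(recall_score(targ, predict, average='macro'))
```

After model.fit(..., callbacks=[metrics]) finishes, metrics.val_recalls holds the unweighted recall for every epoch.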

Edit 2: the model:

def createModelHelper(numNeurons=40, optimizer='adam'):
    inputLayer = Input(shape=(data.shape[1],))
    denseLayer1 = Dense(numNeurons)(inputLayer)
    outputLayer = Dense(1, activation='sigmoid')(denseLayer1)
    model = Model(input=inputLayer, output=outputLayer)
    model.compile(loss=unweightedRecall, optimizer=optimizer)
    return model

1 Answer:

Answer 0 (score: 3)

Keras version (addressing the averaging problem).

Do your two classes actually come from a single-dimension output (0 or 1)?

If so:

def recall(y_true, y_pred):
    # recall of class 1

    #do not use "round" here if you're going to use this as a loss function
    true_positives = K.sum(K.round(y_pred) * y_true)
    possible_positives = K.sum(y_true)
    return true_positives / (possible_positives + K.epsilon())


def unweightedRecall(y_true, y_pred):
    return (recall(y_true,y_pred) + recall(1-y_true,1-y_pred))/2.

Now, if your two classes are actually a 2-element output:

def unweightedRecall(y_true, y_pred):
    return (recall(y_true[:,0],y_pred[:,0]) + recall(y_true[:,1],y_pred[:,1]))/2.
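As a quick sanity check of the two-column form with made-up one-hot data (recall_np is a hypothetical NumPy mirror of the recall above):

```python
import numpy as np

def recall_np(y_true, y_pred):
    # NumPy mirror of the backend recall: TP / (possible positives + eps)
    true_positives = np.sum(np.round(y_pred) * y_true)
    return true_positives / (np.sum(y_true) + np.finfo(float).eps)

# one-hot targets: column 0 <-> class 0, column 1 <-> class 1
y_true = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])
y_pred = np.array([[0.8, 0.2], [0.3, 0.7], [0.4, 0.6], [0.9, 0.1]])

uw = (recall_np(y_true[:, 0], y_pred[:, 0]) +
      recall_np(y_true[:, 1], y_pred[:, 1])) / 2.
```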

Callback version

For a callback, you can use a LambdaCallback and manually print or store the results:

from keras.callbacks import LambdaCallback

stored_metrics = []

def unweightedRecallOnEpochEnd(epoch, logs):
    # a plain function receives (epoch, logs) and has no `self`;
    # use the validation arrays that were passed to fit() directly
    predict = model.predict(X_test)
    targ = y_test

    result = (recall(targ, predict) + recall(1 - targ, 1 - predict)) / 2.
    print("recall for epoch " + str(epoch) + ": " + str(result))
    stored_metrics.append(result)

# define the function before referencing it, under a name that does not
# collide with the unweightedRecall metric above
myCallBack = LambdaCallback(on_epoch_end=unweightedRecallOnEpochEnd)

where recall is a function using np instead of K, with epsilon = np.finfo(float).eps (or epsilon = np.finfo(np.float32).eps).
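Concretely, such a NumPy recall might look like this (a sketch; the sample data is made up):

```python
import numpy as np

epsilon = np.finfo(float).eps  # or: np.finfo(np.float32).eps

def recall(y_true, y_pred):
    # NumPy counterpart of the Keras-backend recall above
    true_positives = np.sum(np.round(y_pred) * y_true)
    possible_positives = np.sum(y_true)
    return true_positives / (possible_positives + epsilon)

y_true = np.array([1, 0, 1, 0, 1])
y_pred = np.array([0.9, 0.8, 0.2, 0.1, 0.7])

# unweighted recall: class-1 recall plus class-0 recall (flipped labels), halved
result = (recall(y_true, y_pred) + recall(1 - y_true, 1 - y_pred)) / 2.
```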