Why do I get different results when computing the true positive rate in a Keras neural network?

Asked: 2018-11-08 13:22:49

Tags: python callback keras neural-network metrics

I am training a neural network with the Python Keras package. The quantity I care about is the true positive rate, so I added it both as a callback and as a metric. Surprisingly, the same formula gives different results: the callback reports 81%, which is correct (I get the same number when I manually compare the labels against the predictions), while the metric reports a higher value, around 86%. What is going on? Any comments on the code would also be appreciated.
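
To be concrete about the quantity I mean, a toy example (not from my data): with three actual positives and two of them predicted correctly, the true positive rate is 2/3.

import numpy as np
# toy illustration of the true positive rate: true positives / all actual positives
labels      = np.array([1, 1, 0, 1])
predictions = np.array([1, 0, 0, 1])
print(np.sum(labels * predictions) / np.sum(labels))  # 0.666...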

from keras import backend as K
import numpy as np
import keras

def sensitivity(y_true, y_pred):  # true positive rate (sensitivity) as a Keras metric
    true_positives = K.sum(K.round(K.clip(y_true, 0, 1)) * K.round(K.clip(y_pred, 0, 1)))
    possible_positives = K.sum(y_true)
    return true_positives / (possible_positives + K.epsilon())
....
def calculate_rates(model, data, label):
    # true positive rate computed over the whole data set in a single pass
    num_positive_prediction = np.sum(label)
    prediction = np.round(np.clip(model.predict(data, batch_size=1024)[:, 1], 0, 1))
    true_positive = np.sum(np.multiply(prediction, label)) / num_positive_prediction
    return true_positive
....
class TestCallback(keras.callbacks.Callback):
    def __init__(self, is_train, data, labels):
        self.data = data
        self.labels = labels[:, 1]  # keep only the positive-class column of the one-hot labels
        self.is_train = is_train

    def on_epoch_end(self, epoch, logs={}):
        true_positive = calculate_rates(model, self.data, self.labels)
        if (epoch + 1) % 10 == 0 or epoch == 0:
            if self.is_train:
                print("Epoch: %d" % (epoch + 1))
                print("Training Set:")
            else:
                print("Testing Set:")
            print("True Positive Rate: %4g" % true_positive)
....
# two hidden layers of 200 ReLU units with dropout, softmax output over num_outputs classes
model = keras.Sequential()
my_init = keras.initializers.RandomNormal(stddev=0.1)
model.add(keras.layers.Dense(units=200, activation='relu', input_dim=num_variables))
model.add(keras.layers.Dropout(dropout_rate))
model.add(keras.layers.Dense(units=200, activation='relu', kernel_initializer=my_init, bias_initializer=my_init))
model.add(keras.layers.Dropout(dropout_rate))
model.add(keras.layers.Dense(units=num_outputs, activation='softmax', kernel_initializer=my_init, bias_initializer=my_init))
# the custom sensitivity metric is attached at compile time; the callbacks recompute the same rate at the end of each epoch
model.compile(loss='binary_crossentropy',
              optimizer=keras.optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-8),
              metrics=[sensitivity]
              )
history = model.fit(train_data, train_labels, epochs=num_epochs, batch_size=1024, class_weight={0: 1, 1: weight},
                    callbacks=[TestCallback(1, train_data, train_labels), TestCallback(0, test_data, test_labels)],
                    verbose=1)
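
For reference, a minimal standalone sketch of an equivalent manual check (it uses scikit-learn's recall_score rather than the code above, assumes one-hot labels with the positive class in column 1, and reuses the test_data / test_labels arrays):

# independent check of the true positive rate using scikit-learn's recall_score
import numpy as np
from sklearn.metrics import recall_score

y_true = test_labels[:, 1]                                  # 0/1 ground truth for the positive class
y_prob = model.predict(test_data, batch_size=1024)[:, 1]    # predicted probability of the positive class
y_pred = (y_prob >= 0.5).astype(int)                        # binarize the probabilities at 0.5

print("True Positive Rate (recall): %4g" % recall_score(y_true, y_pred))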

0 Answers

There are no answers yet.