From https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity we have the Gini impurity $I_G = 1 - \sum_{i=1}^{J} p_i^2$.
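As a quick check of what that formula measures, here is a minimal sketch (the helper name gini_impurity is mine, not from the Wikipedia article): it computes the class proportions $p_i$ from a label array and returns $1 - \sum_i p_i^2$.

import numpy as np

def gini_impurity(labels):
    # Class proportions p_i, then Gini impurity 1 - sum_i p_i^2.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

print(gini_impurity([0, 0, 1, 1]))  # 0.5: a 50/50 split gives 1 - (0.25 + 0.25)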
But from https://www.kaggle.com/batzner/gini-coefficient-an-intuitive-explanation we have:
import numpy as np

def gini(actual, pred):
    assert len(actual) == len(pred)
    # Stack actuals, predictions, and original indices into one array;
    # dtype=float replaces the original np.float, which was removed in NumPy 1.24.
    all = np.asarray(np.c_[actual, pred, np.arange(len(actual))], dtype=float)
    # Sort by prediction descending, breaking ties by original index.
    all = all[np.lexsort((all[:, 2], -1 * all[:, 1]))]
    totalLosses = all[:, 0].sum()
    # Cumulative share of actual values, accumulated in predicted order.
    giniSum = all[:, 0].cumsum().sum() / totalLosses
    giniSum -= (len(actual) + 1) / 2.
    return giniSum / len(actual)

def gini_normalized(actual, pred):
    # Normalize by the Gini of a perfect ordering (pred == actual).
    return gini(actual, pred) / gini(actual, actual)
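For context, a minimal run of those two functions (the sample arrays below are mine, purely for illustration):

actual = [1, 0, 1, 1, 0]
pred = [0.9, 0.3, 0.8, 0.1, 0.2]      # ranks one positive below both negatives
print(gini(actual, pred))             # ~0.0667
print(gini_normalized(actual, pred))  # ~0.3333; a perfect ranking would give 1.0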
Why are these two different? Is there a reference for the second piece of code, and what is each of them trying to compute?