Understanding TreeInterpreter output with RandomForestClassifier

Date: 2018-02-16 12:16:44

Tags: python machine-learning scikit-learn random-forest

I have applied a Random Forest classifier and used TreeInterpreter to get the features that contribute to the prediction for a specific row of my dataset. However, I get 2 values for each feature instead of one, and I don't understand why. Here is my code.

import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from treeinterpreter import treeinterpreter as ti

X, y = make_classification(n_samples=1000,
                           n_features=6,
                           n_informative=3,
                           n_classes=2,
                           random_state=0,
                           shuffle=False)

# Creating a dataFrame
df = pd.DataFrame({'Feature 1': X[:, 0],
                   'Feature 2': X[:, 1],
                   'Feature 3': X[:, 2],
                   'Feature 4': X[:, 3],
                   'Feature 5': X[:, 4],
                   'Feature 6': X[:, 5],
                   'Class': y})


y_train = df['Class']
X_train = df.drop('Class',axis = 1)

rf = RandomForestClassifier(n_estimators=50,
                            random_state=0)

rf.fit(X_train, y_train)

print ("-"*20) 

importances = rf.feature_importances_

indices = X_train.columns

instances = X_train.loc[[60]]

print(rf.predict(instances))

print ("-"*20) 

prediction, biases, contributions = ti.predict(rf, instances)


for i in range(len(instances)):
    print ("Instance", i)
    print ("-"*20) 
    print ("Bias (trainset mean)", biases[i])
    print ("-"*20) 
    print ("Feature contributions:")
    print ("-"*20) 

    for c, feature in sorted(zip(contributions[i], 
                                 indices), 
                             key=lambda x: ~abs(x[0].any())):

        print (feature, np.round(c, 3))

    print ("-"*20) 

Here is the output of my code. Can someone explain why the bias and the feature contributions each produce 2 values instead of one?

--------------------
[0]
--------------------
Instance 0
--------------------
Bias (trainset mean) [ 0.49854  0.50146]
--------------------
Feature contributions:
--------------------
Feature 1 [ 0.16 -0.16]
Feature 2 [-0.024  0.024]
Feature 3 [-0.154  0.154]
Feature 4 [ 0.172 -0.172]
Feature 5 [ 0.029 -0.029]
Feature 6 [ 0.019 -0.019]

1 Answer:

Answer 0 (score: 3):

The reason you get arrays of length 2 for both the bias and the feature contributions is simply that you have a 2-class classification problem.
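You can see this directly from the shapes of the arrays returned by ti.predict. Here is a minimal check, reusing the prediction, biases and contributions already computed in your code; the shape values in the comments assume the 6-feature, 2-class data generated above:

# Each array returned by ti.predict has one entry/column per class
print(prediction.shape)     # (1, 2)    -> one row, one probability per class
print(biases.shape)         # (1, 2)    -> one bias value per class
print(contributions.shape)  # (1, 6, 2) -> (instance, feature, class)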

As the package creator clearly explains in this blog post, in the 3-class case of the iris dataset you get arrays of length 3 (i.e. one array element per class):

from treeinterpreter import treeinterpreter as ti
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris
iris = load_iris()

rf = RandomForestClassifier(max_depth=4)
idx = np.arange(len(iris.target))  # use an array so it can be shuffled in place
np.random.shuffle(idx)

rf.fit(iris.data[idx][:100], iris.target[idx][:100])

instance = iris.data[idx][100:101]  # a single held-out sample

prediction, bias, contributions = ti.predict(rf, instance)
print("Prediction", prediction)
print("Bias (trainset prior)", bias)
print("Feature contributions:")
for c, feature in zip(contributions[0],
                      iris.feature_names):
    print(feature, c)

which gives:

Prediction [[ 0. 0.9 0.1]]
Bias (trainset prior) [[ 0.36 0.262 0.378]]
Feature contributions:
sepal length (cm) [-0.1228614 0.07971035 0.04315104]
sepal width (cm) [ 0. -0.01352012 0.01352012]
petal length (cm) [-0.11716058 0.24709886 -0.12993828]
petal width (cm) [-0.11997802 0.32471091 -0.20473289]

The formula from TreeInterpreter,

prediction = bias + feature_1_contribution + ... + feature_n_contribution

holds for each of the classes in the case of classification problems; hence, for a k-class classification problem the respective arrays have length k (k = 2 in your example, and k = 3 for the iris dataset).
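This is easy to verify numerically. Below is a minimal sketch, reusing the rf and instances from your question; the exact numbers depend on your data, but the reconstruction should match the prediction for every class:

# Verify prediction = bias + sum of feature contributions, separately per class
prediction, biases, contributions = ti.predict(rf, instances)

reconstructed = biases + contributions.sum(axis=1)  # sum over the feature axis
print(prediction)                                    # predicted probability per class
print(reconstructed)                                 # same values, rebuilt from bias + contributions
print(np.allclose(prediction, reconstructed))        # True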