First, credit to kamezekase for the excellent implementation below, which returns the x/y vertices of a decision tree's decision rectangles so they can be plotted afterwards. The implementation is as follows:
import numpy as np
from collections import deque
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import _tree as ctree
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle


class AABB:
    """Axis-aligned bounding box"""
    def __init__(self, n_features):
        self.limits = np.array([[-np.inf, np.inf]] * n_features)

    def split(self, f, v):
        left = AABB(self.limits.shape[0])
        right = AABB(self.limits.shape[0])
        left.limits = self.limits.copy()
        right.limits = self.limits.copy()
        left.limits[f, 1] = v
        right.limits[f, 0] = v
        return left, right


def tree_bounds(tree, n_features=None):
    """Compute final decision rule for each node in tree"""
    if n_features is None:
        n_features = np.max(tree.feature) + 1
    aabbs = [AABB(n_features) for _ in range(tree.node_count)]
    queue = deque([0])
    while queue:
        i = queue.pop()
        l = tree.children_left[i]
        r = tree.children_right[i]
        if l != ctree.TREE_LEAF:
            aabbs[l], aabbs[r] = aabbs[i].split(tree.feature[i], tree.threshold[i])
            queue.extend([l, r])
    return aabbs


def decision_areas(tree_classifier, maxrange, x=0, y=1, n_features=None):
    """Extract decision areas.

    tree_classifier: instance of a sklearn.tree.DecisionTreeClassifier
    maxrange: values to insert for [left, right, top, bottom] if the interval is open (+/-inf)
    x: index of the feature that goes on the x axis
    y: index of the feature that goes on the y axis
    n_features: override autodetection of the number of features
    """
    tree = tree_classifier.tree_
    aabbs = tree_bounds(tree, n_features)
    rectangles = []
    for i in range(len(aabbs)):
        if tree.children_left[i] != ctree.TREE_LEAF:
            continue
        l = aabbs[i].limits
        r = [l[x, 0], l[x, 1], l[y, 0], l[y, 1], np.argmax(tree.value[i])]
        rectangles.append(r)
    rectangles = np.array(rectangles)
    rectangles[:, [0, 2]] = np.maximum(rectangles[:, [0, 2]], maxrange[0::2])
    rectangles[:, [1, 3]] = np.minimum(rectangles[:, [1, 3]], maxrange[1::2])
    return rectangles
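For reference, here is roughly how I am calling it. This is a minimal sketch of my own (not part of the implementation above), fitting on two iris features so the rectangles can be drawn directly in 2-D; the maxrange values are just loose bounds I picked:

from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
X2 = X[:, [0, 1]]                      # sepal length / sepal width only
clf = DecisionTreeClassifier(max_depth=3).fit(X2, y)

# maxrange = [xmin, xmax, ymin, ymax], used to close the open (+/-inf) intervals
rects = decision_areas(clf, maxrange=[0, 8, 0, 5], x=0, y=1, n_features=2)

fig, ax = plt.subplots()
for x0, x1, y0, y1, c in rects:
    ax.add_patch(Rectangle((x0, y0), x1 - x0, y1 - y0,
                           color=plt.cm.tab10(int(c)), alpha=0.3))
ax.scatter(X2[:, 0], X2[:, 1], c=y, cmap='tab10', s=15)
plt.show()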
My goal:
My task is to return the number of misclassified data points, and I need that count for each class the model can predict. With the textbook iris dataset that means 3 classes. In other words, for every rectangle we extract, I also want to record how many data points fall inside that rectangle but are misclassified there. For each rectangle, one class should have 0 misclassified points, since it is the class predicted for that region, i.e. np.argmax(tree.value[i]) as shown above.

I would like to store the counts as part of the r variable, perhaps like this (using verbose names for clarity):

r = [l[x, 0], l[x, 1], l[y, 0], l[y, 1], np.argmax(tree.value[i]), class1_number_of_incorrect_in_this_rect, class2_number_of_incorrect_in_this_rect, class3_number_of_incorrect_in_this_rect]
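The rough approach I had in mind is sketched below. It is my own attempt, not part of the implementation above, and it assumes X and y are the arrays the tree was fitted on: get the leaf id of every training sample with clf.apply, then tally wrong predictions per leaf and per true class.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)

leaf_of_sample = clf.apply(X)      # leaf node id for every training sample
predicted = clf.predict(X)         # class the tree assigns to every sample
n_classes = clf.n_classes_         # number of classes the tree was trained on

# misclassified[node, c] = number of points of true class c that land in
# leaf `node` but are predicted as some other class
misclassified = np.zeros((clf.tree_.node_count, n_classes), dtype=int)
wrong = predicted != y
np.add.at(misclassified, (leaf_of_sample[wrong], y[wrong]), 1)

# Since i in the decision_areas loop is the node index of a leaf, the counts
# could presumably be appended to each rectangle row there, e.g.:
#     r = [l[x, 0], l[x, 1], l[y, 0], l[y, 1],
#          np.argmax(tree.value[i]), *misclassified[i]]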
Although his code is very clean and intuitive, I can't find the right place to add the counting loop. I am also confused by the naming convention: here y is the y axis, not the target, which makes it a bit ambiguous relative to what I want to compare the predictions (tree.predict()) against. And I'm not sure whether this implementation keeps any existing reference to the total number of classes the tree handles; I assume sklearn has something built in for that at least.
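(For what it's worth, my understanding is that a fitted DecisionTreeClassifier exposes the class information directly through standard sklearn attributes, so nothing extra should need to be threaded through the rectangle code. This snippet is just my own note, not part of the implementation above:)

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)

print(clf.n_classes_)          # number of classes, 3 for iris
print(clf.classes_)            # the class labels themselves
print(clf.tree_.value.shape)   # (node_count, 1, n_classes): per-node class counts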
In short: am I even taking the right conceptual approach to my goal?