I want to compare AdaBoost with a decision tree. As a proof of principle, I set the number of estimators in AdaBoost to 1, with a decision tree classifier as the default, expecting the same result as a plain decision tree.
I did indeed get the same accuracy when predicting the test labels. However, the fitting time for AdaBoost was much lower, while the testing time was somewhat higher. AdaBoost seems to use the same default settings as DecisionTreeClassifier; otherwise the accuracies would not be exactly identical.
Can anyone explain this?
Code
from time import time  # needed for the timing calls below
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
print("creating classifier")
clf = AdaBoostClassifier(n_estimators = 1)
clf2 = DecisionTreeClassifier()
print("starting to fit")
time0 = time()
clf.fit(features_train,labels_train) #fit adaboost
fitting_time = time() - time0
print("time for fitting adaboost was", fitting_time)
time0 = time()
clf2.fit(features_train,labels_train) #fit dtree
fitting_time = time() - time0
print("time for fitting dtree was", fitting_time)
time1 = time()
pred = clf.predict(features_test) #test adaboost
test_time = time() - time1
print("time for testing adaboost was", test_time)
time1 = time()
pred = clf2.predict(features_test) #test dtree
test_time = time() - time1
print("time for testing dtree was", test_time)
accuracy_ada = accuracy_score(pred, labels_test) #acc ada
print("accuracy for adaboost is", accuracy_ada)
accuracy_dt = accuracy_score(pred, labels_test) #acc dtree
print("accuracy for dtree is", accuracy_dt)
Output
('time for fitting adaboost was', 3.8290421962738037)
('time for fitting dtree was', 85.19442415237427)
('time for testing adaboost was', 0.1834099292755127)
('time for testing dtree was', 0.056527137756347656)
('accuracy for adaboost is', 0.99089874857792948)
('accuracy for dtree is', 0.99089874857792948)
Answer 0 (score: 2)
I tried to repeat your experiment in IPython, but I do not see such a large difference:
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
import numpy as np
x = np.random.randn(3785,16000)
y = (x[:,0]>0.).astype(np.float)
clf = AdaBoostClassifier(n_estimators = 1)
clf2 = DecisionTreeClassifier()
%timeit clf.fit(x,y)
1 loop, best of 3: 5.56 s per loop
%timeit clf2.fit(x,y)
1 loop, best of 3: 5.51 s per loop
Try using a profiler, or start by repeating the experiment.
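As a sketch of that suggestion, the standard library's cProfile can show where the fitting time actually goes (the data here is a synthetic stand-in, not the original features/labels):

```python
import cProfile
import io
import pstats

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data; substitute your own features/labels.
x = np.random.randn(1000, 200)
y = (x[:, 0] > 0.0).astype(float)

profiler = cProfile.Profile()
profiler.enable()
DecisionTreeClassifier().fit(x, y)  # the call being profiled
profiler.disable()

# Report the five most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Profiling both `fit` calls this way would show whether the time difference comes from the tree-building itself or from overhead elsewhere.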
Answer 1 (score: 1)
You defined two classifiers in the following lines:
clf = AdaBoostClassifier(n_estimators = 1)
clf2 = DecisionTreeClassifier()
These are actually very different classifiers. In the first case (clf), you are defining a single (n_estimators = 1) decision tree with max_depth=1. This is explained in the documentation:
https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html
which states:
"The base estimator is DecisionTreeClassifier(max_depth=1)"
In the second case (clf2), you are defining a decision tree whose max_depth is determined by however many levels are needed to make all the leaves pure. Again, you can find this by reading the documentation.
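A quick way to check this explanation on synthetic data (random_state fixed for reproducibility; this is not the original dataset) is to compare AdaBoostClassifier(n_estimators=1) against an explicit depth-1 stump and an unrestricted tree:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic data where one split is not enough to separate the classes.
rng = np.random.RandomState(0)
x = rng.randn(500, 20)
y = (x[:, 0] + x[:, 1] > 0.0).astype(int)

ada = AdaBoostClassifier(n_estimators=1, random_state=0).fit(x, y)
stump = DecisionTreeClassifier(max_depth=1, random_state=0).fit(x, y)
full = DecisionTreeClassifier(random_state=0).fit(x, y)

# The single tree inside the ensemble is a depth-1 stump...
print(ada.estimators_[0].get_depth())  # -> 1
# ...while the unrestricted tree keeps splitting until its leaves are pure.
print(full.get_depth())  # prints a larger depth

# With uniform initial weights, the ensemble's one stump should find the
# same best split as the explicit stump, so their scores should agree.
print(ada.score(x, y), stump.score(x, y), full.score(x, y))
```

With only one estimator, AdaBoost behaves like the stump, not like the fully grown tree, which is why the two classifiers in the question are not equivalent.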
Moral of the story: read the documentation!