I modeled the data using train_test_split (random_state = 0) and a decision tree without any parameter tuning, and I ran it about 50 times to achieve the best accuracy.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Load the first sheet of the workbook
data = pd.read_excel(r"D:\Laptop.xlsx", sheet_name=0)
# No random_state here, so every run produces a different split
train, test = train_test_split(data, test_size=0.15)
print("Training size: {}; Test size: {}".format(len(train), len(test)))
clf = DecisionTreeClassifier()
features = ["Brand", "Size", "CPU", "RAM", "Resolution", "Class"]
x_train = train[features]
y_train = train["K=20"]  # target column
x_test = test[features]
y_test = test["K=20"]
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
from sklearn.metrics import accuracy_score
score = accuracy_score(y_test, y_pred) * 100
print("Accuracy using Decision Tree:", round(score, 1), "%")
As a second step, I decided to tune the tree parameters with the GridSearchCV method.
import pandas as pd
from scipy.stats import randint
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split, RandomizedSearchCV

# Load the first sheet of the workbook
data = pd.read_excel(r"D:\Laptop.xlsx", sheet_name=0)
# Fixed split this time, unlike the first attempt
train, test = train_test_split(data, test_size=0.15, random_state=0)
print("Training size: {}; Test size: {}".format(len(train), len(test)))
features = ["Brand", "Size", "CPU", "RAM", "Resolution", "Class"]
x_train = train[features]
y_train = train["K=20"]
x_test = test[features]
y_test = test["K=20"]
param_dist = {"max_depth": [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20],
              "min_samples_leaf": randint(10, 60)}
tree_clf = DecisionTreeClassifier()
# Note: this is RandomizedSearchCV, not GridSearchCV; the randint
# distribution above only works with randomized search
tree_cv = RandomizedSearchCV(tree_clf, param_dist, cv=5)
tree_cv.fit(x_train, y_train)
print("Tuned Decision Tree Parameters: {}".format(tree_cv.best_params_))
print("Best score is: {}".format(tree_cv.best_score_))
y_pred = tree_cv.predict(x_test)
from sklearn.metrics import accuracy_score
score = accuracy_score(y_test, y_pred) * 100
print("Accuracy using Decision Tree:", round(score, 1), "%")
My best accuracy with the first approach is much better than with the GridSearchCV approach.
Why does this happen?
Do you know the best way to get the most accurate tree?
Answer 0 (score: 0)
"Why does this happen?"
Without seeing your code I can only speculate, but it is probably down to the granularity of your grid. If you try 50 combinations out of billions of possible ones, the search is a meaningless sample of the space. Is there a way to narrow down which parameters you search over?
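To see how quickly a grid outgrows 50 trials, a back-of-the-envelope count (the per-parameter grid sizes below are invented for illustration):

# Hypothetical per-parameter grid sizes: depth, leaf size, split size, etc.
grid_sizes = [11, 50, 20, 100, 1000]
total = 1
for n in grid_sizes:
    total *= n
print("Possible combinations:", total)            # 1,100,000,000
print("Fraction covered by 50 trials:", 50 / total)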
"Do you know the best way to get the most accurate tree?"
That is a tough question, because you need to define accuracy; you could easily build a model that overfits your test data. Technically, the way to get the best tree is to try every possible combination of hyperparameters, but for any reasonable number of parameters that would take forever. Generally the best approach is to search the hyperparameter space with a Bayesian method, though then you get back a distribution for each parameter. My advice is to start with RandomSearch rather than GridSearch. If you are a big fan of skopt, you can use BayesSearch; I recommend reading its code as well, because I don't think it is well documented.
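A rough sketch of what a broader random search over the decision-tree hyperparameters could look like (the ranges are assumptions, not recommendations):

from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier

# Wide, assumed ranges; adjust to your data
wide_space = {"max_depth": randint(1, 50),
              "min_samples_leaf": randint(1, 60),
              "min_samples_split": randint(2, 40),
              "ccp_alpha": uniform(0.0, 0.05)}  # cost-complexity pruning strength
search = RandomizedSearchCV(DecisionTreeClassifier(), wide_space,
                            n_iter=200, cv=5, random_state=0)
search.fit(x_train, y_train)

And for the Bayesian route, here is an example with BayesSearchCV and XGBoost: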
import xgboost as xgb
from skopt import BayesSearchCV
from sklearn.model_selection import StratifiedKFold

# SETTINGS - CHANGE THESE TO GET SOMETHING MEANINGFUL
ITERATIONS = 10       # 1000
TRAINING_SIZE = 100000  # 20000000
TEST_SIZE = 25000

# Classifier
bayes_cv_tuner = BayesSearchCV(
    estimator=xgb.XGBClassifier(
        n_jobs=1,
        objective='binary:logistic',
        eval_metric='auc',
        tree_method='approx'
    ),
    search_spaces={
        'learning_rate': (0.01, 1.0, 'log-uniform'),
        'min_child_weight': (0, 10),
        'max_depth': (0, 50),
        'max_delta_step': (0, 20),
        'subsample': (0.01, 1.0, 'uniform'),
        'colsample_bytree': (0.01, 1.0, 'uniform'),
        'colsample_bylevel': (0.01, 1.0, 'uniform'),
        'reg_lambda': (1e-9, 1000, 'log-uniform'),
        'reg_alpha': (1e-9, 1.0, 'log-uniform'),
        'gamma': (1e-9, 0.5, 'log-uniform'),
        'n_estimators': (50, 100),
        'scale_pos_weight': (1e-6, 500, 'log-uniform')
    },
    scoring='roc_auc',
    cv=StratifiedKFold(
        n_splits=3,
        shuffle=True,
        random_state=42
    ),
    n_jobs=3,
    n_iter=ITERATIONS,
    verbose=0,
    refit=True,
    random_state=42
)

# X and y are assumed to be your feature matrix and binary labels
result = bayes_cv_tuner.fit(X.values, y.values)
Skopt: https://scikit-optimize.github.io/
Code: https://github.com/scikit-optimize/scikit-optimize/blob/master/skopt/searchcv.py
Answer 1 (score: 0)
It depends on the parameter bounds you specified for GridSearchCV.
A decision tree built with no arguments uses default parameter values that lie outside the ranges you specified by hand. Choose a better set of parameters and try GridSearchCV again.
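For instance, DecisionTreeClassifier defaults to max_depth=None and min_samples_leaf=1, neither of which falls inside the searched ranges of 10-20 and 10-60. A grid that also covers the defaults could look like this sketch (the non-default values are arbitrary):

from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

param_grid = {"max_depth": [None, 5, 10, 15, 20],      # None (the default) grows until leaves are pure
              "min_samples_leaf": [1, 5, 10, 20, 40]}  # 1 is the default
grid_cv = GridSearchCV(DecisionTreeClassifier(), param_grid, cv=5)
grid_cv.fit(x_train, y_train)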