I'm experimenting with XGBoost on a new dataset. Here is my code:
import xgboost as xgb
import pandas as pd
import numpy as np
train = pd.read_csv("train_users_processed_onehot.csv")
labels = train["Buy"].map({"Y":1, "N":0})
features = train.drop("Buy", axis=1)
data_dmat = xgb.DMatrix(data=features, label=labels)
params={"max_depth":5, "min_child_weight":2, "eta": 0.1, "subsamples":0.9, "colsample_bytree":0.8, "objective" : "binary:logistic", "eval_metric": "logloss", "seed": 2333}
rounds = 6000
result = xgb.cv(params=params, dtrain=data_dmat, num_boost_round=rounds, early_stopping_rounds=50, as_pandas=True, seed=2333)
print(result)
The result was (intermediate rows omitted):
     test-logloss-mean  test-logloss-std  train-logloss-mean
0             0.683354          0.000058            0.683206
...
165           0.622318          0.000661            0.607680
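(As an aside, the DataFrame returned by xgb.cv is truncated at the best iteration when early stopping kicks in, so the effective round count can be read off directly; a minimal check, using the result from the run above:)
# early_stopping_rounds truncates the history at the best iteration,
# so the row count equals the effective number of boosting rounds
best_rounds = len(result)  # 166 for the run shown above
print(best_rounds)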
But when I tried to tune the parameters with GridSearchCV, the result turned out to be completely different. More specifically, here is my code:
import xgboost as xgb
from sklearn.model_selection import GridSearchCV
from xgboost.sklearn import XGBClassifier
import numpy as np
import pandas as pd
train_dataframe = pd.read_csv("train_users_processed_onehot.csv")
train_labels = train_dataframe["Buy"].map({"Y":1, "N":0})
train_features = train_dataframe.drop("Buy", axis=1)
params = {"max_depth": [5], "min_child_weight": [2]}
# max_depth=2 and min_child_weight=4 below are only initial values;
# GridSearchCV overrides them with the candidates in params for each fit.
estimator = XGBClassifier(learning_rate=0.1, n_estimators=170, max_depth=2,
                          min_child_weight=4, objective="binary:logistic",
                          subsample=0.9, colsample_bytree=0.8, seed=2333)
gsearch1 = GridSearchCV(estimator, param_grid=params, n_jobs=4, iid=False, verbose=1, scoring="neg_log_loss")
gsearch1.fit(train_features.values, train_labels.values)
print(pd.DataFrame(gsearch1.cv_results_))
print(gsearch1.best_params_)
print(-gsearch1.best_score_)
And I got:
   mean_fit_time  mean_score_time  mean_test_score  mean_train_score
0       87.71497         0.209772        -3.134132         -0.567306
Clearly, 3.134132 is very different from 0.622318. What is the reason for this?
Thanks!
Answer 0 (score: 0):
You are passing different parameters to the two:
The parameters you pass to sklearn are more conservative (you are less likely to overfit the model), so the algorithm does not try to fit the model to the data as aggressively. In turn, you get a lower (worse) score on the second one, which is exactly what should be expected.
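To make the two runs directly comparable, one option is to feed the exact parameters from your xgb.cv run into the sklearn wrapper and score it with the same metric. Here is a minimal sketch (assuming the same CSV and column names as in your question, and setting n_estimators to the early-stopped round count instead of 170):
import pandas as pd
from sklearn.model_selection import cross_val_score
from xgboost.sklearn import XGBClassifier

train = pd.read_csv("train_users_processed_onehot.csv")
y = train["Buy"].map({"Y": 1, "N": 0})
X = train.drop("Buy", axis=1)

# Mirror the xgb.cv parameters one-for-one; n_estimators = 166 matches
# the round count chosen by early stopping in the first script.
clf = XGBClassifier(learning_rate=0.1, n_estimators=166, max_depth=5,
                    min_child_weight=2, subsample=0.9, colsample_bytree=0.8,
                    objective="binary:logistic", seed=2333)

# neg_log_loss is the negated logloss, so compare -scores.mean() against
# the test-logloss-mean column from xgb.cv.
scores = cross_val_score(clf, X.values, y.values, cv=3,
                         scoring="neg_log_loss")
print(-scores.mean())
If both scripts really see the same hyperparameters and similar folds, -scores.mean() should land near the 0.62 test-logloss-mean from xgb.cv; a gap as large as the one you observe points to the parameter differences described above.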