I am trying to find the best xgboost model with GridSearchCV, and for cross-validation I want to use the April target data as the hold-out. Here is the code:
x_train.head()
y_train.head()
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.metrics import make_scorer
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit
import xgboost as xg
xgb_parameters={'max_depth':[3,5,7,9],'min_child_weight':[1,3,5]}
xgb=xg.XGBRegressor(learning_rate=0.1, n_estimators=100,max_depth=5, min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8)
model=GridSearchCV(n_jobs=2,estimator=xgb,param_grid=xgb_parameters,cv=train_test_split(x_train,y_train,test_size=len(y_train['2016-04':'2016-04']), random_state=42, shuffle=False),scoring=my_func)
model.fit(x_train,y_train)
model.grid_scores_
model.best_params_
But I am getting this error when training the model.
Can anyone help me? Or can anyone suggest how to split the unshuffled data into train/test on the last month in order to validate the model?
Thanks for your help.
Answer 0 (score: 1)
The root cause of this error is the way you are using the cv parameter in your GridSearchCV() call:
cv=train_test_split(x_train,y_train,test_size=len(y_train['2016-04':'2016-04'])
Here is an excerpt from the docstring of the cv parameter:
cv : int, cross-validation generator or an iterable, optional
    Determines the cross-validation splitting strategy.
    Possible inputs for cv are:

    - None, to use the default 3-fold cross validation,
    - integer, to specify the number of folds in a `(Stratified)KFold`,
    - An object to be used as a cross-validation generator.
    - An iterable yielding train, test splits.

    For integer/None inputs, if the estimator is a classifier and ``y`` is
    either binary or multiclass, :class:`StratifiedKFold` is used. In all
    other cases, :class:`KFold` is used.

    Refer :ref:`User Guide <cross_validation>` for the various
    cross-validation strategies that can be used here.
However, train_test_split(x_train, y_train) returns four arrays:

X_train, X_test, y_train, y_test

so passing its result to cv leads to the error: ValueError: too many values to unpack (expected 2).
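For illustration, here is a minimal sketch with toy data (the variable names are hypothetical) showing the four return values:

import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)  # toy feature matrix
y = np.arange(10)                 # toy target

# train_test_split returns X_train, X_test, y_train, y_test -- four arrays,
# not the (train_indices, test_indices) pairs that the cv parameter expects
result = train_test_split(X, y, test_size=0.3, shuffle=False)
print(len(result))  # 4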
As a workaround, you can pass one of the options listed in the docstring above, for example an iterable of (train, test) index pairs, as sketched below.
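Since your data are indexed by date, you can build a single predefined split whose test fold is April 2016 and pass it to cv as an iterable of (train_indices, test_indices) pairs. A minimal sketch, assuming x_train and y_train share a DatetimeIndex (my_func is your own scorer, kept as-is):

import numpy as np

# Train on everything before April 2016, validate on April 2016 only
# (assumes y_train has a DatetimeIndex)
train_idx = np.where(y_train.index < '2016-04-01')[0]
test_idx = np.where((y_train.index >= '2016-04-01') &
                    (y_train.index < '2016-05-01'))[0]

# An iterable yielding a single (train, test) split -- a valid cv input
custom_cv = [(train_idx, test_idx)]

model = GridSearchCV(estimator=xgb,
                     param_grid=xgb_parameters,
                     cv=custom_cv,
                     scoring=my_func,
                     n_jobs=2)
model.fit(x_train, y_train)
print(model.best_params_)

Alternatively, TimeSeriesSplit (which you already import) is a ready-made cross-validation generator for ordered data and can be passed directly, e.g. cv=TimeSeriesSplit(n_splits=3), though it produces several expanding-window folds rather than a single April hold-out.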