Running nested cross-validation with spark-sklearn's GridSearchCV as the inner CV and sklearn's cross_validate / cross_val_score as the outer CV raises the "It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation" error.
from pyspark import SparkContext
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate
from spark_sklearn import GridSearchCV

sparkcontext = SparkContext.getOrCreate()

inner_cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=42)
outer_cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
scoring_metric = ['roc_auc', 'average_precision', 'precision']

# Inner CV: spark-sklearn's GridSearchCV distributes the grid search over the cluster
gs = GridSearchCV(sparkcontext, estimator=RandomForestClassifier(
        class_weight='balanced_subsample', n_jobs=-1),
    param_grid=[{"max_depth": [5], "max_features": [.5, .8],
                 "min_samples_split": [2], "min_samples_leaf": [1, 2, 5, 10],
                 "bootstrap": [True, False], "criterion": ["gini", "entropy"],
                 "n_estimators": [300]}],
    scoring=scoring_metric, cv=inner_cv, verbose=1, n_jobs=-1,
    refit='roc_auc', return_train_score=False)

# Outer CV: plain sklearn cross_validate clones the estimator for each fold
# (X and y are the feature matrix and labels prepared earlier in the script)
scores = cross_validate(gs, X, y, cv=outer_cv, scoring=scoring_metric, n_jobs=-1,
                        return_train_score=False)
I have tried changing n_jobs=-1 to n_jobs=1 to remove the joblib-based parallelism and re-running, but it still raises the same exception.
Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
Complete traceback:

Traceback (most recent call last):
File "model_evaluation.py", line 350, in <module>
main()
File "model_evaluation.py", line 269, in main
scores = cross_validate(gs, X, y, cv=outer_cv, scoring=scoring_metric, n_jobs=-1, return_train_score=False)
File "../python27/lib/python2.7/site-packages/sklearn/model_selection/_validation.py", line 195, in cross_validate
for train, test in cv.split(X, y, groups))
File "../python27/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 779, in __call__
while self.dispatch_one_batch(iterator):
File "../python27/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 620, in dispatch_one_batch
tasks = BatchedCalls(itertools.islice(iterator, batch_size))
File "../python27/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 127, in __init__
self.items = list(iterator_slice)
File "../python27/lib/python2.7/site-packages/sklearn/model_selection/_validation.py", line 195, in <genexpr>
for train, test in cv.split(X, y, groups))
File "../python27/lib/python2.7/site-packages/sklearn/base.py", line 61, in clone
new_object_params[name] = clone(param, safe=False)
File "../python27/lib/python2.7/site-packages/sklearn/base.py", line 52, in clone
return copy.deepcopy(estimator)
File "/usr/local/lib/python2.7/copy.py", line 182, in deepcopy
rv = reductor(2)
File "/usr/local/lib/spark/python/pyspark/context.py", line 279, in __getnewargs__
"It appears that you are attempting to reference SparkContext from a broadcast "
Exception: It appears that you are attempting to reference SparkContext from a broadcast
variable, action, or transformation. SparkContext can only be used on the driver, not
in code that it run on workers. For more information, see SPARK-5063.
EDIT: The problem seems to be that sklearn's cross_validate() clones the estimator for each fit in a way that amounts to pickling the estimator object, which is not allowed for the PySpark GridSearchCV estimator because the SparkContext() object cannot/should not be pickled. So how do we clone the estimator correctly?
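As the traceback shows, the failure does not actually require running any folds: scikit-learn's clone() alone triggers it, because clone() fetches the estimator's get_params() and deep-copies every parameter value, and the spark-sklearn GridSearchCV exposes its SparkContext as a constructor parameter. A minimal sketch reproducing the error with the gs object from above:

from sklearn.base import clone

# clone() deep-copies each parameter returned by gs.get_params(); deepcopy
# reaches SparkContext.__getnewargs__, which raises the SPARK-5063 error.
clone(gs)  # -> Exception: It appears that you are attempting to reference SparkContext ...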
Answer (score: 1)
I finally found a solution. The problem occurs when scikit-learn's clone() function tries to deep-copy the SparkContext object. The fix I used is a bit hacky, and I would gladly switch to a better one if it exists, but it works: import the copy module and override its deepcopy() function so that it ignores SparkContext objects when it encounters them.
# Mock the deep-copy function so it skips copying SparkContext objects;
# this avoids the pickling / broadcast-variable errors raised by clone()
import copy

from pyspark import SparkContext

_deepcopy = copy.deepcopy

def mock_deepcopy(*args, **kwargs):
    # Hand the SparkContext back untouched instead of attempting a copy
    if isinstance(args[0], SparkContext):
        return args[0]
    return _deepcopy(*args, **kwargs)

copy.deepcopy = mock_deepcopy
Now it no longer tries to copy the SparkContext object, and everything seems to work.
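One caveat on this design choice: assigning copy.deepcopy = mock_deepcopy patches the function process-wide, so every deepcopy call in the program is affected, not just scikit-learn's clone(). If that matters, a small variant (my own sketch, not part of the original answer; the sparkcontext_safe_deepcopy helper is hypothetical) confines the patch to the outer cross-validation and restores the original afterwards; use it in place of the unconditional assignment on the last line above:

from contextlib import contextmanager

@contextmanager
def sparkcontext_safe_deepcopy():
    # Swap in the patched deepcopy, then restore the original on exit,
    # limiting the monkey-patch to the nested cross-validation call.
    copy.deepcopy = mock_deepcopy
    try:
        yield
    finally:
        copy.deepcopy = _deepcopy

# n_jobs=1 keeps the outer loop in-process; with n_jobs=-1, joblib would
# still have to pickle the SparkContext to ship the estimator to workers.
with sparkcontext_safe_deepcopy():
    scores = cross_validate(gs, X, y, cv=outer_cv, scoring=scoring_metric,
                            n_jobs=1, return_train_score=False)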