pyspark - How to cross-validate several ML algorithms

Date: 2018-04-22 05:28:32

Tags: pyspark apache-spark-ml

I want to be able to select the best-suited algorithm together with its best parameters. How can I do this in a single run, without building a separate pipeline for each algorithm and without the cross-validation checking parameters that are irrelevant to a given algorithm? In other words, I want to compare, for example, logistic regression against random forest. My code is:

    from pyspark.ml import Pipeline
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.evaluation import RegressionEvaluator
    from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

    # labelIndexer, labelIndexer2, assembler and labelconverter are defined earlier (not shown)
    lr = LogisticRegression().setFamily("multinomial")
    # Chain the indexers, assembler and the classifier in a Pipeline
    pipeline = Pipeline(stages=[labelIndexer, labelIndexer2, assembler, lr, labelconverter])

    # Grid of logistic-regression hyperparameters to search over
    paramGrid = ParamGridBuilder() \
        .addGrid(lr.regParam, [0.1, 0.3, 0.01]) \
        .addGrid(lr.elasticNetParam, [0.1, 0.8, 0.01]) \
        .addGrid(lr.maxIter, [10, 20, 25]) \
        .build()

    crossval = CrossValidator(estimator=pipeline,
                              estimatorParamMaps=paramGrid,
                              evaluator=RegressionEvaluator(),
                              numFolds=2)  # use 3+ folds in practice

    # Train the model. This also runs the indexers.
    model = crossval.fit(trainingData)

1 Answer:

Answer 0 (score: 0)

I wrote a quick and dirty workaround in Python/PySpark. It is a bit crude (it has no corresponding Scala class) and I think it lacks save/load functionality, but it may be a starting point for your case. Eventually it might even become a new feature in Spark; it would be nice to have.

The idea is to have a special pipeline stage that acts as a switch between different objects and maintains a dictionary referencing them by string, so the user can enable one or the other by name. The objects can be Estimators, Transformers, or a mix of both; keeping the pipeline consistent is the user's responsibility (do things that make sense, at your own risk). The parameter holding the name of the enabled stage can then be included in the grid to be cross-validated.

    from pyspark.ml.base import Estimator, Transformer
    from pyspark.ml.param import Param, Params, TypeConverters
    from pyspark.ml.wrapper import JavaEstimator


    class PipelineStageChooser(JavaEstimator):
        """Pipeline stage that switches between the stages in ``stagesDict``
        according to the ``selectedStage`` parameter."""

        selectedStage = Param(Params._dummy(), "selectedStage",
                              "key of the selected stage in the dict",
                              typeConverter=TypeConverters.toString)

        stagesDict = None
        _paramMap = {}

        def __init__(self, stagesDict, selectedStage):
            super(PipelineStageChooser, self).__init__()
            self.stagesDict = stagesDict
            if selectedStage not in self.stagesDict.keys():
                raise KeyError("selected stage {0} not found in stagesDict".format(selectedStage))

            # If the selected stage is already a Transformer, there is nothing to fit
            if isinstance(self.stagesDict[selectedStage], Transformer):
                self.fittedSelectedStage = self.stagesDict[selectedStage]

            # Every stage must be either an Estimator or a Transformer
            for stage in stagesDict.values():
                if not (isinstance(stage, Estimator) or isinstance(stage, Transformer)):
                    raise TypeError("Cannot recognize a pipeline stage of type %s." % type(stage))

            self._set(selectedStage=selectedStage)
            self._java_obj = None  # no Java counterpart for this class

        def fit(self, dataset, params=None):
            # Fit the selected Estimator, or return the selected Transformer as-is
            selectedStage_str = self.getOrDefault(self.selectedStage)
            if isinstance(self.stagesDict[selectedStage_str], Estimator):
                return self.stagesDict[selectedStage_str].fit(dataset, params=params)
            elif isinstance(self.stagesDict[selectedStage_str], Transformer):
                return self.stagesDict[selectedStage_str]

Usage example:

    from pyspark.ml import Pipeline
    from pyspark.ml.feature import CountVectorizer, HashingTF
    from pyspark.ml.tuning import ParamGridBuilder

    count_vectorizer = CountVectorizer()  # set params
    hashing_tf = HashingTF()  # set params
    chooser = PipelineStageChooser(stagesDict={"count_vectorizer": count_vectorizer,
                                               "hashing_tf": hashing_tf},
                                   selectedStage="count_vectorizer")

    pipeline = Pipeline(stages=[chooser])

    # Test which of CountVectorizer or HashingTF works better to create features.
    # Could be used just as well to decide between different ML algorithms.
    paramGrid = ParamGridBuilder() \
        .addGrid(chooser.selectedStage, ["count_vectorizer", "hashing_tf"]) \
        .build()
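
For the scenario in the question, here is a minimal sketch (not part of the original answer) of how the chooser could switch between logistic regression and random forest inside a single cross-validation. It assumes the question's labelIndexer, labelIndexer2, assembler, labelconverter and trainingData, and uses a MulticlassClassificationEvaluator as a stand-in evaluator:

    from pyspark.ml import Pipeline
    from pyspark.ml.classification import LogisticRegression, RandomForestClassifier
    from pyspark.ml.evaluation import MulticlassClassificationEvaluator
    from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

    lr = LogisticRegression().setFamily("multinomial")
    rf = RandomForestClassifier()

    # The chooser takes the place of the classifier stage in the pipeline
    model_chooser = PipelineStageChooser(stagesDict={"lr": lr, "rf": rf},
                                         selectedStage="lr")

    # labelIndexer, labelIndexer2, assembler and labelconverter come from the question
    pipeline = Pipeline(stages=[labelIndexer, labelIndexer2, assembler,
                                model_chooser, labelconverter])

    # Only the algorithm choice is cross-validated here; any per-algorithm grids
    # added to this builder would still be combined with every choice, which is a
    # limitation of this workaround.
    paramGrid = ParamGridBuilder() \
        .addGrid(model_chooser.selectedStage, ["lr", "rf"]) \
        .build()

    # Adjust the evaluator's labelCol/predictionCol to match your pipeline columns
    crossval = CrossValidator(estimator=pipeline,
                              estimatorParamMaps=paramGrid,
                              evaluator=MulticlassClassificationEvaluator(),
                              numFolds=3)

    model = crossval.fit(trainingData)

Afterwards, model.bestModel.stages can be inspected to see which algorithm was selected.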