So I cannot extract hyperparameters from a PySpark model after using a Pipeline and CrossValidator.
I found the following answer on StackOverflow: How to extract model hyper-parameters from spark.ml in PySpark?
It was very helpful, and the following code works for me:
modelOnly.bestModel.stages[-1]._java_obj.parent().getRegParam()
The new problem is that I am running an MLP, and when I try to extract the layers I get a random-looking string rather than something like a Python list.
The result:
StepSize: 0.03
Layers: [I@db98c25
My code is roughly:
from pyspark.ml import Pipeline
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# assembler1 (feature assembler), stringIdx (label indexer), layers1, layers2
# and df (the training DataFrame) are defined earlier in the script
trainer = MultilayerPerceptronClassifier(featuresCol='features',
                                         labelCol='label',
                                         predictionCol='prediction',
                                         maxIter=100,
                                         tol=1e-06,
                                         seed=1331,
                                         layers=layers1,
                                         blockSize=128,
                                         stepSize=0.03,
                                         solver='l-bfgs',
                                         initialWeights=None,
                                         probabilityCol='probability',
                                         rawPredictionCol='rawPrediction')

pipeline = Pipeline(stages=[assembler1, stringIdx, trainer])

paramGrid = ParamGridBuilder() \
    .addGrid(trainer.maxIter, [10]) \
    .addGrid(trainer.tol, [1e-06]) \
    .addGrid(trainer.stepSize, [0.03]) \
    .addGrid(trainer.layers, [layers2]) \
    .build()

crossval = CrossValidator(estimator=pipeline,
                          estimatorParamMaps=paramGrid,
                          evaluator=MulticlassClassificationEvaluator(metricName="accuracy"),
                          numFolds=3)

cvModel = crossval.fit(df)
mybestmodel = cvModel.bestModel
java_model = mybestmodel.stages[-1]._java_obj

print("StepSize: ", end='')
print(java_model.parent().getStepSize())
print("Layers: ", end='')
print(java_model.parent().getLayers())
I am running Spark 2.3.2.
What am I missing?
Thanks :)
Answer (score: 2):
That is not a random string; it is the default string representation of the corresponding Java object (here a Java int[] array, hence the [I@... form).
In theory you could convert it element by element from the Python side:
[x for x in mybestmodel.stages[-1]._java_obj.parent().getLayers()]
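(The Java getter returns a py4j array wrapper that can be iterated from Python, so the comprehension above gives back an ordinary Python list of layer sizes.)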
But there is really no need for that. The fitted MultilayerPerceptronClassificationModel exposes this directly through its layers attribute, documented as:
layers
Array of layer sizes including input and output layers.
New in version 1.6.0.
i.e.
mybestmodel.stages[-1].layers
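Putting it together, a minimal sketch of the extraction under the question's setup (best_mlp is just a local name; the printed layer sizes would simply be the contents of layers2, the only value in the grid):

best_mlp = cvModel.bestModel.stages[-1]

print("StepSize: ", end='')
print(best_mlp._java_obj.parent().getStepSize())
print("Layers: ", end='')
print(best_mlp.layers)  # a plain Python list of layer sizes, not a Java object reference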