I'm trying to tune the hyperparameters of a Spark (PySpark) ALS model with TrainValidationSplit.
It works well, but I'd like to know which combination of hyperparameters is best. How do I retrieve the best parameters after the evaluation?
from pyspark.ml.recommendation import ALS
from pyspark.ml.tuning import TrainValidationSplit, ParamGridBuilder
from pyspark.ml.evaluation import RegressionEvaluator

# Training data: (user, item, rating) triples
df = sqlCtx.createDataFrame(
    [(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 4.0), (2, 1, 1.0), (2, 2, 5.0)],
    ["user", "item", "rating"],
)

# Test data: (user, item) pairs to predict ratings for
df_test = sqlCtx.createDataFrame(
    [(0, 0), (0, 1), (1, 1), (1, 2), (2, 1), (2, 2)],
    ["user", "item"],
)

als = ALS()

# Grid of hyperparameter combinations to try
param_grid = (
    ParamGridBuilder()
    .addGrid(als.rank, [10, 15])
    .addGrid(als.maxIter, [10, 15])
    .build()
)

# Each combination is scored by RMSE on the validation split
evaluator = RegressionEvaluator(
    metricName="rmse",
    labelCol="rating",
)

tvs = TrainValidationSplit(
    estimator=als,
    estimatorParamMaps=param_grid,
    evaluator=evaluator,
)
model = tvs.fit(df)
Question: how do I get the best rank and maxIter?
Answer 0 (score: 4):
You can access the best model through the bestModel attribute of the TrainValidationSplitModel:
best_model = model.bestModel
best_model.rank
10
Getting the maximum number of iterations requires a bit more trickery:
(best_model
    ._java_obj      # Get Java object
    .parent()       # Get parent (ALS estimator)
    .getMaxIter())  # Get maxIter
10
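As a complementary approach (a minimal sketch, not part of the original answer): TrainValidationSplitModel also exposes validationMetrics, a list of the evaluator's scores in the same order as the parameter grid, so you can recover the winning combination without reaching into the Java object. This assumes a PySpark release where validationMetrics is populated on the Python side (recent 2.x/3.x versions).

# Pair each ParamMap in the grid with its validation metric and pick the best.
# The metric order matches param_grid; for RMSE, lower is better.
best_metric, best_params = min(
    zip(model.validationMetrics, param_grid),
    key=lambda pair: pair[0],
)
print(best_metric)
print({param.name: value for param, value in best_params.items()})
# e.g. {'rank': 10, 'maxIter': 10}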