I'm trying to tune the parameters of an ALS matrix factorization model that uses implicit data. To do this, I'm using pyspark.ml.tuning.CrossValidator to run through a parameter grid and select the best model. I believe the problem lies with my evaluator, but I can't figure it out.
I can get this to work for an explicit-data model with the regression RMSE evaluator, as follows:
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.ml.recommendation import ALS
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.sql.functions import rand
conf = SparkConf() \
    .setAppName("MovieLensALS") \
    .set("spark.executor.memory", "2g")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)
dfRatings = sqlContext.createDataFrame([(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 4.0), (2, 1, 1.0), (2, 2, 5.0)],
                                       ["user", "item", "rating"])
dfRatingsTest = sqlContext.createDataFrame([(0, 0), (0, 1), (1, 1), (1, 2), (2, 1), (2, 2)], ["user", "item"])
alsExplicit = ALS()
defaultModel = alsExplicit.fit(dfRatings)
paramMapExplicit = ParamGridBuilder() \
    .addGrid(alsExplicit.rank, [8, 12]) \
    .addGrid(alsExplicit.maxIter, [10, 15]) \
    .addGrid(alsExplicit.regParam, [1.0, 10.0]) \
    .build()
evaluatorR = RegressionEvaluator(metricName="rmse", labelCol="rating")
cvExplicit = CrossValidator(estimator=alsExplicit, estimatorParamMaps=paramMapExplicit, evaluator=evaluatorR)
cvModelExplicit = cvExplicit.fit(dfRatings)
predsExplicit = cvModelExplicit.bestModel.transform(dfRatingsTest)
predsExplicit.show()
When I try to do the same thing for implicit data (say, view counts instead of ratings), I get an error I can't make sense of. Here is the code (very similar to the above):
dfCounts = sqlContext.createDataFrame([(0,0,0), (0,1,12), (0,2,3), (1,0,5), (1,1,9), (1,2,0), (2,0,0), (2,1,11), (2,2,25)],
                                      ["user", "item", "rating"])
dfCountsTest = sqlContext.createDataFrame([(0, 0), (0, 1), (1, 1), (1, 2), (2, 1), (2, 2)], ["user", "item"])
alsImplicit = ALS(implicitPrefs=True)
defaultModelImplicit = alsImplicit.fit(dfCounts)
paramMapImplicit = ParamGridBuilder() \
    .addGrid(alsImplicit.rank, [8, 12]) \
    .addGrid(alsImplicit.maxIter, [10, 15]) \
    .addGrid(alsImplicit.regParam, [1.0, 10.0]) \
    .addGrid(alsImplicit.alpha, [2.0, 3.0]) \
    .build()
evaluatorB = BinaryClassificationEvaluator(metricName="areaUnderROC", labelCol="rating")
evaluatorR = RegressionEvaluator(metricName="rmse", labelCol="rating")
cv = CrossValidator(estimator=alsImplicit, estimatorParamMaps=paramMapImplicit, evaluator=evaluatorR)
cvModel = cv.fit(dfCounts)
predsImplicit = cvModel.bestModel.transform(dfCountsTest)
predsImplicit.show()
I tried doing this with the RMSE evaluator and I get an error. As far as I understand, I should also be able to use the AUC metric with the binary classification evaluator, because the predictions from implicit matrix factorization are a confidence matrix c_ui used to predict the binary preference matrix p_ui, per this paper, which the pyspark ALS documentation cites.
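For reference, this is my understanding of how that paper turns the raw observations r_ui into binary preferences and confidences (just an illustrative plain-Python sketch; Spark's ALS does this internally, and alpha here corresponds to the alpha in the param grid above):

# Illustrative only: the preference/confidence construction from the cited paper.
# r_ui is the raw observation (e.g. a view count); alpha scales the confidence.
def preference(r_ui):
    # p_ui = 1 if the user interacted with the item at all, else 0
    return 1.0 if r_ui > 0 else 0.0

def confidence(r_ui, alpha=2.0):
    # c_ui grows linearly with the strength of the observation
    return 1.0 + alpha * r_ui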
Using either evaluator gives me an error, and I can't find any fruitful discussion online about cross-validating implicit ALS models. I've been looking through the CrossValidator source trying to pin down the error, but I'm having trouble. One of my thoughts is that after the process converts the implicit data matrix r_ui into the binary matrix p_ui and confidence matrix c_ui, I'm not sure what it compares the predicted c_ui matrix against during the evaluation stage.
Here is the error:
Traceback (most recent call last):
File "<ipython-input-16-6c43b997005e>", line 1, in <module>
cvModel = cv.fit(dfCounts)
File "C:/spark-1.6.1-bin-hadoop2.6/python\pyspark\ml\pipeline.py", line 69, in fit
return self._fit(dataset)
File "C:/spark-1.6.1-bin-hadoop2.6/python\pyspark\ml\tuning.py", line 239, in _fit
model = est.fit(train, epm[j])
File "C:/spark-1.6.1-bin-hadoop2.6/python\pyspark\ml\pipeline.py", line 67, in fit
return self.copy(params)._fit(dataset)
File "C:/spark-1.6.1-bin-hadoop2.6/python\pyspark\ml\wrapper.py", line 133, in _fit
java_model = self._fit_java(dataset)
File "C:/spark-1.6.1-bin-hadoop2.6/python\pyspark\ml\wrapper.py", line 130, in _fit_java
return self._java_obj.fit(dataset._jdf)
File "C:\spark-1.6.1-bin-hadoop2.6\python\lib\py4j-0.9-src.zip\py4j\java_gateway.py", line 813, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "C:/spark-1.6.1-bin-hadoop2.6/python\pyspark\sql\utils.py", line 45, in deco
return f(*a, **kw)
File "C:\spark-1.6.1-bin-hadoop2.6\python\lib\py4j-0.9-src.zip\py4j\protocol.py", line 308, in get_return_value
format(target_id, ".", name), value)
etc.......
UPDATE
I tried scaling the input so it is in the 0-to-1 range and using the RMSE evaluator. It seems to work well until I try to plug it into CrossValidator.
The following code works. I get predictions and I get an RMSE value from the evaluator.
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import FloatType
import pyspark.sql.functions as F
from pyspark.ml.recommendation import ALS
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import RegressionEvaluator
conf = SparkConf() \
    .setAppName("ALSPractice") \
    .set("spark.executor.memory", "2g")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)
# Users 0, 1, 2, 3 - Items 0, 1, 2, 3, 4, 5 - Ratings 0.0-5.0
dfCounts2 = sqlContext.createDataFrame([(0,0,5.0), (0,1,5.0), (0,3,0.0), (0,4,0.0),
                                        (1,0,5.0), (1,2,4.0), (1,3,0.0), (1,4,0.0),
                                        (2,0,0.0), (2,2,0.0), (2,3,5.0), (2,4,5.0),
                                        (3,0,0.0), (3,1,0.0), (3,3,4.0)],
                                       ["user", "item", "rating"])

dfCountsTest2 = sqlContext.createDataFrame([(0,0), (0,1), (0,2), (0,3), (0,4),
                                            (1,0), (1,1), (1,2), (1,3), (1,4),
                                            (2,0), (2,1), (2,2), (2,3), (2,4),
                                            (3,0), (3,1), (3,2), (3,3), (3,4)], ["user", "item"])
# Normalize rating data to [0,1] range based on max rating
colmax = dfCounts2.select(F.max('rating')).collect()[0][0]
normalize = F.udf(lambda x: x / colmax, FloatType())
dfCountsNorm = dfCounts2.withColumn('ratingNorm', normalize(F.col('rating')))
alsImplicit = ALS(implicitPrefs=True)
defaultModelImplicit = alsImplicit.fit(dfCountsNorm)
preds = defaultModelImplicit.transform(dfCountsTest2)
evaluatorR2 = RegressionEvaluator(metricName="rmse", labelCol="ratingNorm")
evaluatorR2.evaluate(defaultModelImplicit.transform(dfCountsNorm))
preds = defaultModelImplicit.transform(dfCountsTest2)
What I don't understand is why the following doesn't work. I'm using the same estimator, the same evaluator, and fitting the same data. Why do these work above but not inside CrossValidator:
paramMapImplicit = ParamGridBuilder() \
    .addGrid(alsImplicit.rank, [8, 12]) \
    .addGrid(alsImplicit.maxIter, [10, 15]) \
    .addGrid(alsImplicit.regParam, [1.0, 10.0]) \
    .addGrid(alsImplicit.alpha, [2.0, 3.0]) \
    .build()
cv = CrossValidator(estimator=alsImplicit, estimatorParamMaps=paramMapImplicit, evaluator=evaluatorR2)
cvModel = cv.fit(dfCountsNorm)
Answer 0 (score: 9)
Ignoring technical problems, strictly speaking neither method is correct given the input generated by ALS with implicit feedback:

- RegressionEvaluator doesn't fit because, as you already know, the predictions can be interpreted as confidence values and are represented as floating point numbers in the range [0, 1], while the label column is just an unbounded integer. These values are clearly not comparable.
- BinaryClassificationEvaluator doesn't fit because, even if the predictions can be interpreted as probabilities, the label doesn't represent a binary decision. Moreover, the prediction column has an invalid type and cannot be used directly with BinaryClassificationEvaluator.

You can try converting one of the columns so the input fits the requirements, but this is not really a justified approach from a theoretical perspective, and it introduces additional parameters that are hard to tune:

- map the label column to the [0, 1] range and use RMSE (a sketch of this variant is shown after the code block below), or
- convert the label column to a binary indicator with a fixed threshold and extend ALS / ALSModel to return the expected column type. Assuming a threshold value of 1, it could look something like this:
from pyspark.ml.recommendation import *
from pyspark.sql.functions import udf, col
from pyspark.mllib.linalg import DenseVector, VectorUDT

class BinaryALS(ALS):
    def fit(self, df):
        assert self.getImplicitPrefs()
        model = super(BinaryALS, self).fit(df)
        return ALSBinaryModel(model._java_obj)

class ALSBinaryModel(ALSModel):
    def transform(self, df):
        transformed = super(ALSBinaryModel, self).transform(df)
        as_vector = udf(lambda x: DenseVector([1 - x, x]), VectorUDT())
        return transformed.withColumn(
            "rawPrediction", as_vector(col("prediction")))

# Add binary label column
with_binary = dfCounts.withColumn(
    "label_binary", (col("rating") > 0).cast("double"))

als_binary_model = BinaryALS(implicitPrefs=True).fit(with_binary)

evaluatorB = BinaryClassificationEvaluator(
    metricName="areaUnderROC", labelCol="label_binary")
evaluatorB.evaluate(als_binary_model.transform(with_binary))
## 1.0
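For the first option (mapping the label column to the [0, 1] range and using RMSE), a minimal sketch could look like the following. This only illustrates the shape of the approach, using the column names from the question; it does not address whatever caused the original CrossValidator failure:

from pyspark.ml.recommendation import ALS
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.sql.functions import col, max as sql_max

# Scale the raw counts into [0, 1] so the label is on the same scale
# as the confidence-like predictions produced by implicit ALS.
rating_max = dfCounts.select(sql_max("rating")).first()[0]
dfCountsScaled = dfCounts.withColumn("label", col("rating") / rating_max)

alsScaled = ALS(implicitPrefs=True, userCol="user", itemCol="item", ratingCol="label")
gridScaled = ParamGridBuilder() \
    .addGrid(alsScaled.rank, [8, 12]) \
    .addGrid(alsScaled.alpha, [2.0, 3.0]) \
    .build()
evaluatorRMSE = RegressionEvaluator(metricName="rmse", labelCol="label")

cvScaled = CrossValidator(estimator=alsScaled, estimatorParamMaps=gridScaled,
                          evaluator=evaluatorRMSE)
cvScaledModel = cvScaled.fit(dfCountsScaled)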
Generally speaking, material about evaluating recommender systems with implicit feedback is somewhat missing from textbooks; I suggest you read eliasah's answer about evaluating these kinds of recommenders.
Answer 1 (score: 0)
With implicit feedback we don't have the users' reactions to our recommendations, so we cannot use precision-based metrics.
In the already cited paper, the expected percentile ranking metric is used instead.
You can try to implement an Evaluator based on a similar metric in Spark ML and use it in your cross-validation pipeline.
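As an illustration only (this is not part of Spark; it assumes the question's user, item, rating columns and a fitted ALS model), a rough sketch of such a metric could look like this. Note that the paper ranks every catalogue item per user, while this simplified version only ranks the held-out pairs:

from pyspark.sql import Window
import pyspark.sql.functions as F

def expected_percentile_rank(model, held_out):
    """Mean percentile rank of held-out interactions, weighted by their raw counts.
    0.0 means every observed item was recommended first; ~0.5 is no better than random."""
    scored = model.transform(held_out)
    # percent_rank is 0.0 for the highest-scored item of each user and 1.0 for the lowest
    w = Window.partitionBy("user").orderBy(F.desc("prediction"))
    ranked = scored.withColumn("rank_ui", F.percent_rank().over(w))
    totals = ranked.agg(
        F.sum(F.col("rating") * F.col("rank_ui")).alias("num"),
        F.sum("rating").alias("den")).first()
    return totals["num"] / totals["den"]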
Answer 2 (score: 0)
Very late to the party here, but I'll post this in case anyone stumbles upon this question like I did.
I was getting a similar error when trying to use CrossValidator with an ALS model. I resolved it by setting the coldStartStrategy parameter in ALS to "drop". That is:
alsImplicit = ALS(implicitPrefs=True, coldStartStrategy="drop")
with the rest of the code kept the same.
I expect what was happening in my example is that the cross-validation splits created scenarios where items in the validation set did not appear in the training set, which results in NaN prediction values. The best solution is to drop the NaN values at evaluation time, as described in the documentation.
I don't know whether we were getting the same error, so I can't guarantee this solves the OP's problem, but setting coldStartStrategy="drop" is good practice for cross-validation anyway.
Note: my error message was "Params must be either a param map or a list/tuple of param maps", which did not seem to point to a problem with the coldStartStrategy parameter or NaN values, but this change resolved the error.
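For completeness, here is a sketch of how this fits into the OP's cross-validation code (column names as in the question; coldStartStrategy requires Spark 2.2+):

from pyspark.ml.recommendation import ALS
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# Without coldStartStrategy="drop", any user/item pair that lands in a
# validation fold but not in the corresponding training fold gets a NaN
# prediction, and the RMSE computed over NaNs is itself NaN.
alsImplicit = ALS(implicitPrefs=True, coldStartStrategy="drop",
                  userCol="user", itemCol="item", ratingCol="rating")

paramMapImplicit = ParamGridBuilder() \
    .addGrid(alsImplicit.rank, [8, 12]) \
    .addGrid(alsImplicit.regParam, [1.0, 10.0]) \
    .addGrid(alsImplicit.alpha, [2.0, 3.0]) \
    .build()

evaluatorR = RegressionEvaluator(metricName="rmse", labelCol="rating")
cv = CrossValidator(estimator=alsImplicit, estimatorParamMaps=paramMapImplicit,
                    evaluator=evaluatorR)
cvModel = cv.fit(dfCounts)  # NaN rows are dropped before each fold's evaluation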
Answer 3 (score: 0)
In order to cross-validate my ALS model with implicitPrefs=True, I needed to adapt @zero323's answer slightly for pyspark==2.3.0, where I was getting the following exception:
xspy4j.Py4JException: Target Object ID does not exist for this gateway :o2733\\n\tat py4j.Gateway.invoke(Gateway.java...java:79)\\n\tat py4j.GatewayConnection.run(GatewayConnection.java:214)\\n\tat java.lang.Thread.run(Thread.java:748)\\n
ALS extends JavaEstimator, which provides the hooks necessary for fitting Estimators that wrap a Java/Scala implementation. We need to override _create_model in BinaryALS so PySpark can keep all the Java object references straight:
import pyspark.sql.functions as F
from pyspark.ml.linalg import DenseVector, VectorUDT
from pyspark.ml.recommendation import ALS, ALSModel
from pyspark.sql.dataframe import DataFrame

class ALSBinaryModel(ALSModel):
    def transform(self, df: DataFrame) -> DataFrame:
        transformed = super().transform(df)
        as_vector = F.udf(lambda x: DenseVector([1 - x, x]), VectorUDT())
        return transformed.withColumn("rawPrediction", as_vector(F.col("prediction")))

class BinaryALS(ALS):
    def fit(self, df: DataFrame) -> ALSBinaryModel:
        assert self.getImplicitPrefs()
        return super().fit(df)

    def _create_model(self, java_model) -> ALSBinaryModel:
        return ALSBinaryModel(java_model=java_model)
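A usage sketch mirroring zero323's example (my own addition, assuming the same dfCounts DataFrame from the question):

from pyspark.ml.evaluation import BinaryClassificationEvaluator

# Threshold the raw counts into a binary label, as in the earlier answer
with_binary = dfCounts.withColumn(
    "label_binary", (F.col("rating") > 0).cast("double"))

als_binary_model = BinaryALS(implicitPrefs=True).fit(with_binary)

evaluatorB = BinaryClassificationEvaluator(
    metricName="areaUnderROC", labelCol="label_binary")
auc = evaluatorB.evaluate(als_binary_model.transform(with_binary))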