Spark (Scala): reversing StringIndexer in a nested array

Date: 2018-01-07 10:48:11

Tags: scala apache-spark pyspark apache-spark-sql apache-spark-ml

I have an implicit ALS model from which I get the top X recommendations using recommendForAllUsers. The problem is that what I get back are the index values of the users and items:

+-------+--------------------+                                                  
|users  |     items          |
+-------+--------------------+
|   1580|[[34,0.20143434],...|
|   4900|[[22,0.3178908], ...|
|   5300|[[5,0.025709413],...|
|   6620|[[22,2.9114444E-9...|
|   7240|[[5,0.048516575],...|
+-------+--------------------+
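For context, a minimal sketch of how such a frame is typically produced; the variable names (training, als, recs) and the column names are assumptions, not taken from the question:

import org.apache.spark.ml.recommendation.ALS

// Implicit-feedback ALS over StringIndexer-encoded ids; column names are assumptions
val als = new ALS()
  .setImplicitPrefs(true)
  .setUserCol("userIdIndex")
  .setItemCol("productIdIndex")
  .setRatingCol("rating")

val model = als.fit(training)

// Top 3 items per user, returned as an array of (id, rating) structs
val recs = model.recommendForAllUsers(3)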

I would like to convert them back to their original string representations.

I tried to follow the solution suggested here: PySpark reversing StringIndexer in nested array

But it is in PySpark and I'm having a hard time translating it to Scala, since the PySpark syntax is not entirely clear to me.

Mainly, the following part is unclear to me:

from pyspark.sql.functions import array, col, lit, struct

n = 3  # same as numItems

# One Column per label; indexing product_labels_ by an index column
# recovers the original string label
product_labels_ = array(*[lit(x) for x in product_labels])

# Rebuild the recommendations array, replacing each index with its label
recommendations = array(*[struct(
    product_labels_[col("recommendations")[i]["productIdIndex"]].alias("productId"),
    col("recommendations")[i]["rating"].alias("rating")
) for i in range(n)])

recs.withColumn("recommendations", recommendations)

Any help would be greatly appreciated!

1 Answer:

Answer 0 (score: 2):

The syntax is almost identical:

import org.apache.spark.sql.functions.{array, col, lit, struct}

val n = 3  // same as numItems

// One Column per label; indexing into it recovers the original string
val product_labels_ = array(product_labels.map(lit): _*)

// Rebuild each struct with the string label in place of the index
val recommendations = array((0 until n).map(i => struct(
  product_labels_(col("recommendations")(i)("productIdIndex")).alias("productId"),
  col("recommendations")(i)("rating").alias("rating")
)): _*)

recs.withColumn("recommendations", recommendations)
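Here product_labels is the array of original string labels from the StringIndexerModel fitted on the item column. A minimal sketch of obtaining it (the variable name and path are assumptions):

import org.apache.spark.ml.feature.StringIndexerModel

// Either reuse the model fitted earlier in the pipeline, or load it back
val productIndexerModel = StringIndexerModel.load("/path/to/productIndexer")
val product_labels: Array[String] = productIndexerModel.labels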

A udf might be easier to understand, if the labels fit in the Int range:

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf

case class Rec(label: String, rating: Double)

// Maps each (index, rating) struct in the array back to (label, rating);
// if the rating field is Float (as in raw ALS output), match on Float instead
def translateLabels(labels: Seq[String]) = udf {
  (recs: Seq[Row]) => recs.map {
    case Row(i: Int, v: Double) => Rec(labels(i), v)
  }
}
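A sketch of applying it, reusing product_labels and the recommendations column from above:

recs.withColumn(
  "recommendations",
  translateLabels(product_labels)(col("recommendations"))
)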