DataFrame: converting an array inside a column to RDD[Array[String]]

Time: 2017-01-29 14:37:21

Tags: apache-spark dataframe apache-spark-sql

Given the DataFrame:

+---+----------+
|key|     value|
+---+----------+
|foo|       bar|
|bar|  one, two|
+---+----------+

I then want to use the value column as the input to FPGrowth, which requires an RDD[Array[String]]:

val transactions: RDD[Array[String]] = df.select("value").rdd.map(x => x.getList(0).toArray.map(_.toString))

import org.apache.spark.mllib.fpm.{FPGrowth, FPGrowthModel}
val fpg = new FPGrowth().setMinSupport(0.01)
val model = fpg.run(transactions)

I get this exception:

  org.apache.spark.SparkException: Job aborted due to stage failure: Task 7 in stage 141.0 failed 1 times, most recent failure: Lost task 7.0 in stage 141.0 (TID 2232, localhost): java.lang.ClassCastException: java.lang.String cannot be cast to scala.collection.Seq
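
The ClassCastException suggests that value is a plain string column rather than an array, so getList(0) cannot cast its contents to a Seq. A quick way to check, sketched here against the df shown above:

df.printSchema()
// if value is reported as "string" rather than "array<string>",
// getList(0) will fail with exactly this ClassCastException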

Any suggestions are welcome!

1 answer:

Answer 0 (score: 3)

Instead of

val transactions: RDD[Array[String]] = df.select("value").rdd.map(x => x.getList(0).toArray.map(_.toString))

try

val transactions = df.select("value").rdd.map(_.toString.stripPrefix("[").stripSuffix("]").split(","))

This produces the expected output, i.e. an RDD[Array[String]]:

scala> val transactions = df.select("value").rdd.map(_.toString.stripPrefix("[").stripSuffix("]").split(","))
transactions: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[10] at map at <console>:33

scala> transactions.take(2)
res21: Array[Array[String]] = Array(Array(bar), Array(one, two))

The stripPrefix and stripSuffix calls remove the "[" and "]" before split is applied.
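
A slightly more direct variant is sketched below as an untested suggestion: it reads the cell with getString(0) instead of going through Row.toString, and trims each token, since split(",") alone keeps the space after a comma as part of the next item:

import org.apache.spark.mllib.fpm.FPGrowth
import org.apache.spark.rdd.RDD

// Read the cell directly as a String, split on commas,
// and strip surrounding whitespace from every item.
val transactions: RDD[Array[String]] =
  df.select("value").rdd.map(_.getString(0).split(",").map(_.trim))

// Feed the cleaned transactions into FPGrowth as in the question.
val model = new FPGrowth().setMinSupport(0.01).run(transactions)

With trimmed items, "one, two" becomes Array("one", "two") rather than Array("one", " two"), which matters when FPGrowth compares items while counting frequencies.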