Unable to convert RDD[Row] to a DataFrame

Date: 2017-01-26 08:19:32

Tags: scala apache-spark apache-spark-sql

For the following code, which converts a DataFrame to an RDD[Row] and appends the data for a new column via mapPartitions:

 // df is a DataFrame
val dfRdd = df.rdd.mapPartitions {
  val bfMap = df.rdd.sparkContext.broadcast(factorsMap)
  iter =>
    val locMap = bfMap.value
    iter.map { r =>
      val newseq = r.toSeq :+ locMap(r.getAs[String](inColName))
      Row(newseq)
    }
}
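
(Side note: the broadcast variable above is created inside the block passed to mapPartitions, but it still runs exactly once on the driver, because the block simply evaluates to the iter => ... function. A more conventional placement, sketched below with the Row(...) call kept exactly as above, is to broadcast before the transformation:)

val bfMap = df.rdd.sparkContext.broadcast(factorsMap)
val dfRdd = df.rdd.mapPartitions { iter =>
  val locMap = bfMap.value
  iter.map { r =>
    val newseq = r.toSeq :+ locMap(r.getAs[String](inColName))
    Row(newseq) // unchanged from the original
  }
}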

The output of the RDD[Row] with the additional column looks correct:

println("**dfrdd\n" + dfRdd.take(5).mkString("\n"))

**dfrdd
[ArrayBuffer(0021BEC286CC, 4, Series, series, bc514da3e0d534da8207e3aab231d1cb, livetv, 148818)]
[ArrayBuffer(0021BEE7C556, 4, Series, series, bc514da3e0d534da8207e3aab231d1cb, livetv, 26908)]
[ArrayBuffer(8C7F3BFD4B82, 4, Series, series, bc514da3e0d534da8207e3aab231d1cb, livetv, 99942)]
[ArrayBuffer(0021BEC8F8B8, 1, Series, series, 0d2debc63efa3790a444c7959249712b, livetv, 53994)]
[ArrayBuffer(10EA59F10C8B, 1, Series, series, 0d2debc63efa3790a444c7959249712b, livetv, 1427)]

Now let's try to convert the RDD[Row] back to a DataFrame:

val newSchema = df.schema.add(StructField("userf",IntegerType))
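
(This assumes the usual Spark SQL type imports, e.g.:)

import org.apache.spark.sql.types.{IntegerType, StructField}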

Now let's create the updated DataFrame:

val df2 = df.sqlContext.createDataFrame(dfRdd,newSchema)
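
For reference, the createDataFrame overload used here takes an RDD of Rows plus an explicit schema; its signature (on SQLContext) is:

def createDataFrame(rowRDD: RDD[Row], schema: StructType): DataFrame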

Does the new schema look correct?

newSchema.printTreeString()

root
 |-- user: string (nullable = true)
 |-- score: long (nullable = true)
 |-- programType: string (nullable = true)
 |-- source: string (nullable = true)
 |-- item: string (nullable = true)
 |-- playType: string (nullable = true)
 |-- userf: integer (nullable = true)

Note that we do see the new userf column.

However, it does not work:

println("df2: " + df2.take(1))

Job aborted due to stage failure: Task 0 in stage 9.0 failed 1 times, 
most recent failure: Lost task 0.0 in stage 9.0 (TID 9, localhost, executor driver): java.lang.RuntimeException: Error while encoding: 

java.lang.RuntimeException: scala.collection.mutable.ArrayBuffer is not a  
 valid external type for schema of string
if (assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object), 0, user), StringType), true) AS user#28
+- if (assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object), 0, user), StringType), true)
   :- assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object).isNullAt
   :  :- assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object)
   :  :  +- input[0, org.apache.spark.sql.Row, true]
   :  +- 0
   :- null

So: what detail is missing here?

Note: I am not interested in alternative approaches here, e.g. withColumn or Datasets. Let us consider only this approach:

  • Convert to an RDD
  • Append the new data element to each row
  • Update the schema for the new column
  • Convert the new RDD + schema back to a DataFrame

1 Answer:

Answer 0 (score: 4):

There appears to be a small mistake in the invocation of Row's constructor:

val newseq = r.toSeq :+ locMap(r.getAs[String](inColName))
Row(newseq)

The signature of this "constructor" (it is actually the apply method) is:

def apply(values: Any*): Row

When you pass a Seq[Any], it is treated as a single value of type Seq[Any]. You want to pass the elements of that sequence instead, so you should use:

val newseq = r.toSeq :+ locMap(r.getAs[String](inColName))
Row(newseq: _*)
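
A quick standalone way to see the difference (the values here are made up purely for illustration):

import org.apache.spark.sql.Row

val values = Seq("0021BEC286CC", 4, 148818)
Row(values).length      // 1 -- a single field holding the whole Seq
Row(values: _*).length  // 3 -- one field per element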

Once this is fixed, the Rows will match the schema you constructed, and you will get the expected result.
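
For completeness, a minimal sketch of the whole round trip with the fix applied (factorsMap and inColName stand in for the question's actual values):

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{IntegerType, StructField}

val bfMap = df.rdd.sparkContext.broadcast(factorsMap)
val dfRdd = df.rdd.mapPartitions { iter =>
  val locMap = bfMap.value
  iter.map { r =>
    val newseq = r.toSeq :+ locMap(r.getAs[String](inColName))
    Row(newseq: _*) // splat: one Row field per element
  }
}
val newSchema = df.schema.add(StructField("userf", IntegerType))
val df2 = df.sqlContext.createDataFrame(dfRdd, newSchema)
df2.show(5)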