How to automate StructType creation for converting an RDD to a DataFrame

Date: 2016-11-15 15:06:46

Tags: scala apache-spark spark-dataframe rdd

I want to save an RDD as a parquet file. To do that, I convert the RDD to a DataFrame, and then save the DataFrame as a parquet file using a schema:

    val aStruct = new StructType(Array(StructField("id",StringType,nullable = true),
                                       StructField("role",StringType,nullable = true)))
    val newDF = sqlContext.createDataFrame(filtered, aStruct)

The question is how to create aStruct automatically for all of the columns, assuming they are all StringType? Also, what does nullable = true mean? Does it mean that all empty values will be replaced by Null?

1 answer:

Answer 0: (score: 4)

Why not use the built-in toDF?

scala> val myRDD = sc.parallelize(Seq(("1", "roleA"), ("2", "roleB"), ("3", "roleC")))
myRDD: org.apache.spark.rdd.RDD[(String, String)] = ParallelCollectionRDD[60] at parallelize at <console>:27

scala> val colNames = List("id", "role")
colNames: List[String] = List(id, role)

scala> val myDF = myRDD.toDF(colNames: _*)
myDF: org.apache.spark.sql.DataFrame = [id: string, role: string]

scala> myDF.show
+---+-----+
| id| role|
+---+-----+
|  1|roleA|
|  2|roleB|
|  3|roleC|
+---+-----+

scala> myDF.printSchema
root
 |-- id: string (nullable = true)
 |-- role: string (nullable = true)

scala> myDF.write.save("myDF.parquet")
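
If you do want to build the StructType programmatically, as the question asks, a minimal sketch is below (the helper name stringSchema is hypothetical; it assumes every column is a StringType):

```scala
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Build a schema from a list of column names, treating every column
// as a nullable StringType (stringSchema is a hypothetical helper).
def stringSchema(colNames: Seq[String]): StructType =
  StructType(colNames.map(name => StructField(name, StringType, nullable = true)))

val aStruct = stringSchema(Seq("id", "role"))
// aStruct can then be passed to sqlContext.createDataFrame(filtered, aStruct)
```

This only constructs the schema object; no Spark job runs until you actually create and save the DataFrame.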

nullable=true simply means that the specified column can contain null values (this is especially useful for int columns, which would normally not have a null value, since Int has no NA or null).
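
To illustrate the flag itself, here is a sketch that only builds a schema (no Spark job is run); the column names are made up for the example:

```scala
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

// nullable is a per-field flag in the schema: it declares whether
// the column is allowed to contain null values.
val schema = StructType(Seq(
  StructField("id", IntegerType, nullable = false),   // nulls not allowed
  StructField("score", IntegerType, nullable = true)  // nulls allowed
))
```

The flag does not replace empty values with null; it only records whether nulls are permitted in that column.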