Schema changes when writing a DataFrame containing a Vector to ORC

Asked: 2018-03-22 15:45:39

Tags: scala apache-spark

I am writing a Spark DataFrame, where one column has the Vector data type, to ORC. When I load the DataFrame back, the schema changes.

import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.{DataFrame, SaveMode}

var df : DataFrame = spark.createDataFrame(Seq(
  (1.0, Vectors.dense(0.0, 1.1, 0.1)),
  (0.0, Vectors.dense(2.0, 1.0, -1.0)),
  (0.0, Vectors.dense(2.0, 1.3, 1.0)),
  (1.0, Vectors.dense(0.0, 1.2, -0.5))
)).toDF("label", "features")

df.printSchema

df.write.mode(SaveMode.Overwrite).orc("/some/path")
val newDF = spark.read.orc("/some/path")

newDF.printSchema

The output of df.printSchema is:

|-- label: double (nullable = false)
|-- features: vector (nullable = true)

The output of newDF.printSchema is:

|-- label: double (nullable = true)
|-- features: struct (nullable = true)
|    |-- type: byte (nullable = true)
|    |-- size: integer (nullable = true)
|    |-- indices: array (nullable = true)
|    |    |-- element: integer (containsNull = true)
|    |-- values: array (nullable = true)
|    |    |-- element: double (containsNull = true)
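The struct above appears to match the underlying `sqlType` of Spark ML's `VectorUDT` (where `type` 1 marks a dense vector and 0 a sparse one); ORC does not carry the user-defined-type annotation, so reading the file back yields the raw struct. As a hedged workaround, not from the original post, the vector column can be rebuilt after reading. This sketch assumes all rows hold dense vectors, as in the sample data, so only the `values` field is needed:

```scala
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.functions.udf

// Rebuild an ml Vector from the struct's `values` field.
// Assumes dense vectors only (type = 1); sparse rows would also need
// the `size` and `indices` fields.
val toVec = udf((values: Seq[Double]) => Vectors.dense(values.toArray))

val restored = newDF.withColumn("features", toVec($"features.values"))
restored.printSchema  // features should show as vector again
```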

What is causing this? I am using Spark 2.2.0 and Scala 2.11.8.
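One alternative worth noting (an assumption based on Spark 2.x behavior, not something stated in the question): Parquet stores the Spark schema, including UDT annotations, in the file metadata, so a vector column can survive a round trip there:

```scala
import org.apache.spark.sql.SaveMode

// Parquet keeps the Spark schema (with the VectorUDT annotation) in its
// footer metadata, so the column reads back as `vector`, not a struct.
df.write.mode(SaveMode.Overwrite).parquet("/some/path")
spark.read.parquet("/some/path").printSchema
```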

0 Answers:

No answers yet