I have a DataFrame with a nested structure (originally the Avro output of a mapreduce job) that I would like to flatten. The schema of the original DataFrame looks like this (simplified):
|-- key: struct
| |-- outcome: boolean
| |-- date: string
| |-- age: int
| |-- features: map
| | |-- key: string
| | |-- value: double
|-- value: struct (nullable = true)
| |-- nullString: string (nullable = true)
In JSON representation, a row of data looks like this:
{"key":
  {"outcome": false,
   "date": "2015-01-01",
   "age": 20,
   "features": {
     "f1": 10.0,
     "f2": 11.0,
     ...
     "f100": 20.1
   }
  },
 "value": null
}
The features map has the same structure for all rows, i.e. the same key set (f1, f2, ..., f100). By "flattening" I mean the following:
+----------+----------+---+----+----+-...-+------+
| outcome| date|age| f1| f2| ... | f100|
+----------+----------+---+----+----+-...-+------+
|     false|2015-01-01| 20|10.0|11.0| ... | 20.1|
...
(truncated)
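To make the intent concrete, here is a minimal sketch in plain Scala (no Spark) of what this flattening does to a single row: lift the struct fields to the top level and merge in the feature map entries. The `Key` case class and `flatten` helper are illustrative names, not part of any Spark API.

```scala
// One row of the nested data, modeled as plain Scala values.
case class Key(outcome: Boolean, date: String, age: Int, features: Map[String, Double])

// Flattening = top-level struct fields plus one column per feature key.
def flatten(k: Key): Map[String, Any] =
  Map("outcome" -> k.outcome, "date" -> k.date, "age" -> k.age) ++ k.features

val row  = Key(outcome = false, date = "2015-01-01", age = 20,
               features = Map("f1" -> 10.0, "f2" -> 11.0, "f100" -> 20.1))
val flat = flatten(row)
// flat holds outcome/date/age plus f1, f2, f100 as top-level entries
```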
I am using Spark 2.1 and the spark-avro package from https://github.com/databricks/spark-avro.
The original DataFrame is read in with:
import com.databricks.spark.avro._
val df = spark.read.avro("path/to/my/file.avro")
// it's nested
df.show()
+--------------------+------+
| key| value|
+--------------------+------+
|[false,2015... |[null]|
|[false,2015... |[null]|
...
(truncated)
Any help is greatly appreciated!
Answer 0 (score: 4)
In Spark you can extract data from nested AVRO files. For example, the JSON you provided:
{"key":
  {"outcome": false,
   "date": "2015",
   "features": {
     "f1": v1,
     "f2": v2,
     ...
   }
  },
 "value": null
}
after being read from AVRO:
import com.databricks.spark.avro._
val df = spark.read.avro("path/to/my/file.avro")
can be flattened by selecting the fields of the nested struct. For this you can write code like:
df.select("key.*").show
+----+------------+-------+
|date| features |outcome|
+----+------------+-------+
|2015| [v1,v2,...]| false|
+----+------------+-------+
...
(truncated)
df.select("key.*").printSchema
root
|-- date: string (nullable = true)
|-- features: struct (nullable = true)
| |-- f1: string (nullable = true)
| |-- f2: string (nullable = true)
| |-- ...
|-- outcome: boolean (nullable = true)
Or something like this:
df.select("key.features.*").show
+---+---+---
| f1| f2|...
+---+---+---
| v1| v2|...
+---+---+---
...
(truncated)
df.select("key.features.*").printSchema
root
|-- f1: string (nullable = true)
|-- f2: string (nullable = true)
|-- ...
if that is the output you expect.
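Since the question states that the feature key set is fixed (f1, f2, ..., f100), the fully flat table can be produced by generating one select expression per target column and passing them to `selectExpr`. Below is a sketch: the expression list is plain Scala, and `key.features['f1']` uses standard Spark SQL map-element syntax. The names `featureKeys` and `flatExprs` are illustrative, not from the original post.

```scala
// Build selectExpr strings that flatten the key struct and the features map
// in one pass; assumes the feature keys f1..f100 are known up front.
val featureKeys: Seq[String] = (1 to 100).map(i => s"f$i")

val flatExprs: Seq[String] =
  Seq("key.outcome", "key.date", "key.age") ++
    featureKeys.map(k => s"key.features['$k'] AS $k")

// With Spark this would be applied as:
//   val flat = df.selectExpr(flatExprs: _*)
//   flat.show()  // columns: outcome | date | age | f1 | ... | f100
```

This gets around the fact that `key.features.*` star-expansion works for struct columns but a map column needs explicit per-key access.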