Accessing nested columns in a PySpark DataFrame

Date: 2017-02-15 03:36:22

Tags: apache-spark dataframe pyspark

I have an XML document that looks like this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Position>
    <Search>
        <Location>
            <Region>OH</Region>
            <Country>us</Country>
            <Longitude>-816071</Longitude>
            <Latitude>415051</Latitude>
        </Location>
    </Search>
</Position>

I read it into a DataFrame:

df = sqlContext.read.format('com.databricks.spark.xml').options(rowTag='Position').load('1.xml')
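For context, the com.databricks.spark.xml format comes from the spark-xml package, which has to be on the classpath (for example via --packages when starting PySpark). A minimal sketch of the same read using the Spark 2.x SparkSession API, assuming that package is available:

from pyspark.sql import SparkSession

# Build (or reuse) a SparkSession; the spark-xml package must already be on the classpath
spark = SparkSession.builder.getOrCreate()

# rowTag='Position' makes each <Position> element one row of the DataFrame
df = spark.read.format('com.databricks.spark.xml') \
    .options(rowTag='Position') \
    .load('1.xml')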

I can see one column:

df.columns
['Search']

print df.select("Search")
DataFrame[Search: struct<Location:struct<Country:string,Latitude:bigint,Longitude:bigint,Region:string>>]
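For reference, df.printSchema() shows the same nesting in a more readable form; given the struct type above, the output should look roughly like this:

df.printSchema()
# root
#  |-- Search: struct (nullable = true)
#  |    |-- Location: struct (nullable = true)
#  |    |    |-- Country: string (nullable = true)
#  |    |    |-- Latitude: long (nullable = true)
#  |    |    |-- Longitude: long (nullable = true)
#  |    |    |-- Region: string (nullable = true)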

How do I access the nested columns, e.g. Location.Region?

1 Answer:

Answer 0 (score: 7)

You can do the following:

df.select("Search.Location.*").show()

Output:

+-------+--------+---------+------+
|Country|Latitude|Longitude|Region|
+-------+--------+---------+------+
|     us|  415051|  -816071|    OH|
+-------+--------+---------+------+
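If you only need a single field rather than the whole struct, the same dot notation works directly, or you can use pyspark.sql.functions.col to alias the extracted column; a minimal sketch:

from pyspark.sql.functions import col

# Selecting one nested field; the resulting column is named after the leaf field
df.select("Search.Location.Region").show()

# Equivalent, with an explicit alias on the extracted column
df.select(col("Search.Location.Region").alias("Region")).show()

# Expected output in both cases:
# +------+
# |Region|
# +------+
# |    OH|
# +------+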