Selecting a nested column from a PySpark DataFrame loaded with spark-xml

Date: 2018-06-15 05:47:34

Tags: apache-spark hadoop pyspark apache-spark-sql

I am trying to select a nested ArrayType column from a PySpark DataFrame.

I want to select only the items column from this DataFrame. I don't know what I am doing wrong here.

XML:

<?xml version="1.0" encoding="utf-8"?>
<shiporder orderid="str1234">
  <orderperson>ABC</orderperson>
  <shipto>
    <name>XYZ</name>
    <address>305, Ram CHowk</address>
    <city>Pune</city>
    <country>IN</country>
  </shipto>
  <items>
  <item>
    <title>Clothing</title>
    <notes>
        <note>Brand:CK</note>
        <note>Size:L</note>
    </notes>
    <quantity>6</quantity>
    <price>208</price>
  </item>
  </items>
</shiporder>

DataFrame schema:

root
 |-- _orderid: string (nullable = true)
 |-- items: struct (nullable = true)
 |    |-- item: array (nullable = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- notes: struct (nullable = true)
 |    |    |    |    |-- note: array (nullable = true)
 |    |    |    |    |    |-- element: string (containsNull = true)
 |    |    |    |-- price: double (nullable = true)
 |    |    |    |-- quantity: long (nullable = true)
 |    |    |    |-- title: string (nullable = true)
 |-- orderperson: string (nullable = true)
 |-- shipto: struct (nullable = true)
 |    |-- address: string (nullable = true)
 |    |-- city: string (nullable = true)
 |    |-- country: string (nullable = true)
 |    |-- name: string (nullable = true)

df.show(truncate=False)
+--------+---------------------------------------------------------------------------------------------+-------------+-------------------------------+
|_orderid|items                                                                                        |orderperson  |shipto                         |
+--------+---------------------------------------------------------------------------------------------+-------------+-------------------------------+
|str1234 |[[[[[color:Brown, Size:12]], 82.0, 1, Footwear], [[[Brand:CK, Size:L]], 208.0, 6, Clothing]]]|Vikrant Chand|[305, Giotto, Irvine, US, Amit]|
+--------+---------------------------------------------------------------------------------------------+-------------+-------------------------------+

When I select the items column on its own, it returns null:

df.select([ 'items']).show()
+-----+
|items|
+-----+
| null|
+-----+

Selecting items together with shipto (another nested column) fixes the problem:

df.select([ 'items','shipto']).show()
+--------------------+--------------------+
|               items|              shipto|
+--------------------+--------------------+
|[[[[[color:Brown,...|[305, Giotto, Irv...|
+--------------------+--------------------+

1 Answer:

Answer 0: (score: 1)

This is a bug in spark-xml; it was fixed in 0.4.1.
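Upgrading to the fixed release should make the plain `select('items')` work. A sketch of launching PySpark with the newer package; the `_2.11` Scala-version suffix in the coordinate is an assumption matching Spark 2.x-era builds, so adjust it to your cluster:

```shell
# Pull the fixed spark-xml release from Maven at startup
# (Scala 2.11 coordinate assumed; match your Spark build).
pyspark --packages com.databricks:spark-xml_2.11:0.4.1
```

After restarting the session, reload the file the same way as before, e.g. `spark.read.format("com.databricks.spark.xml").option("rowTag", "shiporder").load("shiporder.xml")`, and the items column should no longer come back null.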