How can I parse an XML file containing a list of identical nodes in Apache Spark?
Example file:
<?xml version="1.0" encoding="UTF-8"?>
<osm version="0.6" generator="CGImap 0.4.0 (25361 thorn-02.openstreetmap.org)" copyright="OpenStreetMap and contributors" attribution="http://www.openstreetmap.org/copyright" license="http://opendatacommons.org/licenses/odbl/1-0/">
<bounds minlat="48.8306100" minlon="2.3310900" maxlat="48.8337900" maxlon="2.3389100"/>
<node id="430785" visible="true" version="8" changeset="24482318" timestamp="2014-08-01T14:24:53Z" user="dhuyp" uid="1779584" lat="48.8340725" lon="2.3309196"/>
<node id="661209" visible="true" version="6" changeset="9914127" timestamp="2011-11-22T21:46:44Z" user="lapinos03" uid="33634" lat="48.8337517" lon="2.3333992"/>
<node id="24912996" visible="true" version="2" changeset="806076" timestamp="2009-03-14T10:38:25Z" user="Goon" uid="24657" lat="48.8302268" lon="2.3338015">
<tag k="crossing" v="uncontrolled"/>
<tag k="highway" v="traffic_signals"/>
</node>
<node id="24912994" visible="true" version="5" changeset="5904801" timestamp="2010-09-28T15:32:01Z" user="maouth-" uid="322872" lat="48.8301333" lon="2.3309869">
<tag k="highway" v="mini_roundabout"/>
</node>
</osm>
Answer 0 (score: 2)
As mentioned in the other answer, spark-xml from Databricks is one way to read XML, but there is currently a bug in spark-xml that prevents it from importing self-closing elements (such as the attribute-only <node .../> elements in your file). As a workaround, you can read the entire XML file in as a single value and then parse it yourself, along these lines:
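A minimal sketch of that workaround, assuming Spark and the scala-xml library are on the classpath. The helper name `parseNodes` and the chosen attributes (`id`, `lat`, `lon`) are illustrative, not part of the original answer:

```scala
import scala.xml.XML

// Parse the <node> elements of an OSM document into (id, lat, lon) tuples.
// scala.xml handles self-closing elements fine, which sidesteps the spark-xml bug.
def parseNodes(content: String): Seq[(Long, Double, Double)] = {
  val xml = XML.loadString(content)
  (xml \ "node").map { n =>
    ((n \@ "id").toLong, (n \@ "lat").toDouble, (n \@ "lon").toDouble)
  }
}

// With Spark (assuming a SparkSession named `spark`), read each file as one
// (filename, content) pair and flatten the parsed rows into a DataFrame:
//   import spark.implicits._
//   val df = spark.sparkContext.wholeTextFiles("/path/to/map.osm")
//     .flatMap { case (_, content) => parseNodes(content) }
//     .toDF("id", "lat", "lon")
```

Note that `wholeTextFiles` pulls each file into memory as a single string, so this approach suits many modestly sized files rather than one huge XML document.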
Answer 1 (score: 0)
Use https://github.com/databricks/spark-xml:
val df = sqlContext.read
  .format("com.databricks.spark.xml")
  .option("rowTag", "node") // each repeated <node> element becomes one row
  .load(pathTOyourDATA)
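With `rowTag` set to `node`, spark-xml maps the repeated nested `<tag>` children to an array column, which you can flatten with `explode`. This is a hedged sketch: the column names (`_id`, `tag`, `_k`, `_v`) assume spark-xml's default `_` attribute prefix and the schema it would infer for this file:

```scala
import org.apache.spark.sql.functions.explode

// One output row per <tag k="..." v="..."/> child, keyed by the parent node id.
val tags = df
  .select(df("_id"), explode(df("tag")).as("tag"))
  .select("_id", "tag._k", "tag._v")
tags.show()
```

Nodes without any `<tag>` children are dropped by `explode`; use `explode_outer` instead if you want to keep them.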