Writing a list of map data to CSV

Date: 2019-03-24 02:05:55

Tags: scala apache-spark scala-xml

val rdd = df.rdd.map(line =>
    Row.fromSeq(
        scala.xml.XML.loadString("<?xml version='1.0' encoding='utf-8'?>" + line(1)).child
            .filter(elem =>
                   elem.label == "name1"
                || elem.label == "name2"
                || elem.label == "name3"
                || elem.label == "name4")
            .map(elem => elem.label -> elem.text)
            .toList))
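The parsing step can be checked outside Spark on a single sample string. A minimal sketch with scala-xml, where the `<row>` payload stands in for `line(1)` and is made-up sample data:

```scala
import scala.xml.XML

// a sample XML payload standing in for line(1) (hypothetical data)
val payload = "<row><name1>value1</name1><name2>value2</name2><other>skip</other></row>"

val pairs = XML
  .loadString("<?xml version='1.0' encoding='utf-8'?>" + payload)
  .child                                                        // children of the <row> element
  .filter(elem => Set("name1", "name2", "name3", "name4")(elem.label)) // keep wanted labels only
  .map(elem => elem.label -> elem.text)                         // (label, text) pairs
  .toList

println(pairs) // List((name1,value1), (name2,value2))
```

Note that `<other>` is dropped by the filter, and `name3`/`name4` simply produce no pair when absent, which is why the rows end up with varying lengths.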

When I run rdd.take(10).foreach(println) (the result is an RDD[Row]), it produces the following output:

[(name1, value1), (name2, value2),(name3, value3)]
[(name1, value11), (name2, value22),(name3, value33)]
[(name1, value111), (name2, value222),(name4, value44)]

I want to save this to CSV with name1..name4 as the CSV header, as shown below. Can anyone help me do this with Apache Spark 2.4.0?

name1    | name2    | name3   | name4
value1   | value2   | value3  | null
value11  | value22  | value33 | null
value111 | value222 | null    | value444

1 Answer:

Answer 0 (score: 2)

I adjusted your example and added some intermediate values to help you follow each step:

  import scala.collection.immutable
  import org.apache.spark.rdd.RDD
  import org.apache.spark.sql.Row

  // define the labels you want:
  val labels = Seq("name1", "name2", "name3", "name4")
  val result: RDD[Row] = rdd.map { line =>
    // your raw data
    val tuples: immutable.Seq[(String, String)] =
      scala.xml.XML.loadString("<?xml version='1.0' encoding='utf-8'?>" + line(1)).child
        .filter(elem => labels.contains(elem.label)) // you can use the label list to filter
        .map(elem => elem.label -> elem.text)
        .toList // no change here
    // take the value for each label you have, or fall back to an empty String
    val values: Seq[String] =
      labels.map(l => tuples.find { case (k, _) => k == l }.map(_._2).getOrElse(""))
    // create a Row
    Row.fromSeq(values)
  }
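The padding step (`labels.map` / `find` / `getOrElse`) can be illustrated in isolation with plain Scala; the sample tuples below are made up to match the third row of the desired output:

```scala
// fixed column order for the CSV
val labels = Seq("name1", "name2", "name3", "name4")

// a sparse row: name3 is missing (sample data)
val tuples = List("name1" -> "value111", "name2" -> "value222", "name4" -> "value444")

// for each label, look up its value; missing labels become ""
val values = labels.map(l => tuples.find { case (k, _) => k == l }.map(_._2).getOrElse(""))

println(values) // List(value111, value222, , value444)
```

Every row now has exactly four values in the same order, so the rows line up under the header.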

Now I'm not sure about this part, but essentially you have to insert the header as the first row:

[name1, name2, name3]
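Rather than inserting a header row into the RDD by hand, the usual route in Spark 2.4 is to build a DataFrame from the RDD[Row] with an explicit schema and let the writer emit the header, e.g. `spark.createDataFrame(result, schema)` followed by `.write.option("header", "true").csv(path)`. The header-plus-rows assembly itself is plain string work; a minimal sketch with made-up row values:

```scala
val labels = Seq("name1", "name2", "name3", "name4")

// rows as produced by the padding step, with "" where a label was absent (sample data)
val rows = Seq(
  Seq("value1", "value2", "value3", ""),
  Seq("value111", "value222", "", "value444")
)

// prepend the header row, then join columns with commas and rows with newlines
val csv = (labels +: rows).map(_.mkString(",")).mkString("\n")
println(csv)
```

Note this naive join does not quote fields, so it only works if the values contain no commas or newlines; the DataFrame csv writer handles quoting for you.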