How to parse CSV into a Dataset of case classes?

Asked: 2016-02-15 11:23:03

Tags: scala apache-spark

I am trying to parse a CSV with the new Spark 1.6.0 Dataset API, but I am running into some trouble. I want to create one case class instance per CSV row.

Here is the code:

case class MyData(forename: String, surname: String, age: Integer)

def toMyData(text: String): Dataset[MyData] = {
  val splits: Array[String] = text.split("\t")
  Seq(MyData(
    forename = splits(0),
    surname = splits(1),
    age = splits(2).asInstanceOf[Integer]
  )).toDS()
}

val lines: Dataset[MyData] = sqlContext.read.text("/data/mydata.csv").as[MyData]
lines.map(r => toMyData(r)).foreach(println)

My toMyData is just an attempt at an Encoder, but I don't know how to follow the API correctly.

Any ideas?

EDIT

I have changed the code as shown below, but I cannot even get it to compile:

val lines: Dataset[MyData] = sqlContext.read.text("/data/mydata.csv").as[MyData]
lines.map(r => toMyData(r)).foreach(println)

def toMyData(text: String): Dataset[MyData] = {
  val df = sc.parallelize(Seq(text)).toDF("value")

  df.map(_.getString(0).split("\t") match {
    case Array(fn, sn, age) =>
      MyData(fn, sn, age.asInstanceOf[Integer])
  }).toDS
}

sqlContext.read.text("/data/mydata.csv").as[String].map(r => toMyData(r)).collect().foreach(println)

I get:

Error:(50, 10) value toDS is not a member of org.apache.spark.rdd.RDD[MyData]
possible cause: maybe a semicolon is missing before `value toDS'?
      }).toDS
         ^
Error:(54, 133) Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing sqlContext.implicits._  Support for serializing other types will be added in future releases.
    sqlContext.read.text("/data/mydata.csv").as[String].map(r => toMyData(r)).collect().foreach(println)

1 answer:

Answer 0 (score: 3)

Ignoring format validation and exception handling:

import sqlContext.implicits._  // provides toDF, toDS and the $"..." syntax

// Simulate sqlContext.read.text("/data/mydata.csv")
val df = sc.parallelize(Seq("John\tDoe\t22")).toDF("value")

df.rdd.map(_.getString(0).split("\t") match {
  case Array(fn, sn, age) => MyData(fn, sn, age.toInt)
}).toDS
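Since the snippet above deliberately skips validation, here is a plain-Scala sketch of a safer line parser (the `parseLine` name is hypothetical, not part of the answer) that yields `None` for malformed rows instead of throwing:

```scala
case class MyData(forename: String, surname: String, age: Integer)

// Hypothetical helper: returns None for rows that do not have exactly
// three tab-separated fields with a numeric age.
def parseLine(line: String): Option[MyData] =
  line.split("\t") match {
    case Array(fn, sn, age) if age.nonEmpty && age.forall(_.isDigit) =>
      Some(MyData(fn, sn, age.toInt))
    case _ => None
  }
```

With such a helper in scope, `df.rdd.flatMap(r => parseLine(r.getString(0))).toDS` would drop bad rows instead of failing the whole job.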

Or without converting to an RDD:

import org.apache.spark.sql.functions.regexp_extract

val pattern = "^(.*?)\t(.*?)\t(.*)$"
val exprs = Seq(
  (1, "forename", "string"), (2, "surname", "string"), (3, "age", "integer")
).map{case (i, n, t) => regexp_extract($"value", pattern, i).alias(n).cast(t)}

df
  .select(exprs: _*)  // Convert to (StringType, StringType, IntegerType)
  .as[MyData]  // cast
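To see what the three capture groups produce, the same pattern can be exercised in plain Scala outside Spark (a minimal sketch, using the sample row from above):

```scala
// The two non-greedy groups pick off the first two tab-separated fields;
// the final greedy group takes the rest of the line.
val pattern = "^(.*?)\t(.*?)\t(.*)$".r

val parsed = "John\tDoe\t22" match {
  case pattern(fn, sn, age) => (fn, sn, age.toInt)
}
// parsed == ("John", "Doe", 22)
```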

Key points:

  • Do not nest actions, transformations, or DDS (e.g. creating a Dataset inside map, as toMyData does).
  • Understand how asInstanceOf works before using it; it does not apply here.
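On the second point, a short plain-Scala illustration: asInstanceOf is a runtime cast, not a conversion, so a String can never become an Integer that way; toInt actually parses the text.

```scala
import scala.util.Try

// asInstanceOf only reinterprets the type; the object is still a String,
// so the cast fails with a ClassCastException at runtime.
val cast = Try("22".asInstanceOf[Integer])   // Failure(ClassCastException)

// toInt parses the characters and produces a real Int.
val parsed = "22".toInt                      // 22
```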