Getting a ClassCastException when trying to save a file in Avro format in Spark

Date: 2018-12-11 04:21:16

Tags: scala apache-spark avro

I am trying to process a file and then save it in the Avro file format using the saveAsNewAPIHadoopFile method in Spark. Below is my program:

case class TrafficSchema(a: String, b: Int, c: Int, d: Int, e: Float)

def main(args: Array[String]) {
  val tableName: String = "CHICAGO_TRAFFIC_TRACKER"
  System.setProperty("hadoop.home.dir", "D:\\")
  val input_Path = "E:\\SharedVM\\Chicago_Traffic_Tracker_-_Historical_Congestion_Estimates_by_Region_-_2013-2018.csv"
  val job = Job.getInstance
  val schema = Schema.create(Schema.Type.STRING)
  AvroJob.setOutputKeySchema(job, schema)
  val avroOutputPath = "D:\\ScalaIDe_latest\\ChicagoTratfficTracker\\output\\Chicago_Traffic_Tracker-2013-2018_AVRO"
  val conf = new SparkConf().setMaster("local").setAppName("TrafficTracker")
    .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .set("spark.kryo.registrator", "com.jesperdj.example.MyRegistrator")
  val sc = new SparkContext(conf)

  val data = sc.textFile(input_Path)

  val header = data.first()
  val trackerData = data.filter(row => row != header).map(row => row.trim().split(","))
    .map(f => {
      //println(f(0) + "-" + f(1).toInt + "-" + f(2).toInt + "-" + f(3).toInt + "-" + f(4))
      TrafficSchema(f(0), f(1).toInt, f(2).toInt, f(3).toInt, f(4).toFloat)
    })

  val intermediateRDD = trackerData.mapPartitions(
    (iter: Iterator[TrafficSchema]) => iter.map(new AvroKey(_) -> NullWritable.get()))
  intermediateRDD.saveAsNewAPIHadoopFile(
    avroOutputPath,
    classOf[AvroKey[TrafficSchema]],
    classOf[NullWritable],
    classOf[AvroKeyOutputFormat[GenericRecord]],
    job.getConfiguration)
}

Below is the error I am getting:

org.apache.avro.file.DataFileWriter$AppendWriteException: java.lang.ClassCastException: com.jesperdj.example.TrafficSchema cannot be cast to java.lang.CharSequence
    at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
    at org.apache.avro.mapreduce.AvroKeyRecordWriter.write(AvroKeyRecordWriter.java:77)
    at org.apache.avro.mapreduce.AvroKeyRecordWriter.write(AvroKeyRecordWriter.java:39)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply$mcV$sp(PairRDDFunctions.scala:1108)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1106)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1106)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1277)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1114)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1085)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:86)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:254)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.ClassCastException: com.jesperdj.example.TrafficSchema cannot be cast to java.lang.CharSequence
    at org.apache.avro.generic.GenericDatumWriter.writeString(GenericDatumWriter.java:267)
    at org.apache.avro.specific.SpecificDatumWriter.writeString(SpecificDatumWriter.java:71)
    at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:128)
    at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:75)
    at org.apache.avro.reflect.ReflectDatumWriter.write(ReflectDatumWriter.java:159)
    at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:62)
    at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302)
    ... 14 more

I am not sure what should be defined in the Schema.create method. I tried using a record, but I am not sure how to define the traffic schema in Schema.create.

Sample CSV data:

(CSV data was posted as an image in the original question)

1 Answer:

Answer 0 (score: 1)

You actually have to define an Avro record schema, or use an external library (such as avro4s) to derive it from your case class.

Using plain Avro:

val schema =
  """{
    |  "type": "record",
    |  "name": "TrafficSchema",
    |  "namespace": "your.project.namespace",
    |  "fields": [
    |    {"name": "str", "type": "string"},
    |    {"name": "i",   "type": "int"},
    |    {"name": "i1",  "type": "int"},
    |    {"name": "i2",  "type": "int"},
    |    {"name": "fl",  "type": "float"}
    |  ]
    |}""".stripMargin

val trafficSchema = new Schema.Parser().parse(schema)
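The parsed record schema then has to replace the `Schema.create(Schema.Type.STRING)` call in the question, and the values written must be Avro records rather than the raw case class (which is what triggers the cast to CharSequence). A minimal sketch of the plain-Avro path; the field names (`str`, `i`, `i1`, `i2`, `fl`) follow the schema string above, and `toRecord` is a helper written for this example, not part of Avro:

```scala
import org.apache.avro.generic.{GenericData, GenericRecord}
import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapreduce.AvroJob
import org.apache.hadoop.io.NullWritable

// Register the record schema instead of a plain string schema
AvroJob.setOutputKeySchema(job, trafficSchema)

// Convert each case class instance to a GenericRecord before wrapping it in AvroKey
def toRecord(t: TrafficSchema): GenericRecord = {
  val rec = new GenericData.Record(trafficSchema)
  rec.put("str", t.a)
  rec.put("i", t.b)
  rec.put("i1", t.c)
  rec.put("i2", t.d)
  rec.put("fl", t.e)
  rec
}

val intermediateRDD = trackerData.mapPartitions(
  iter => iter.map(t => new AvroKey[GenericRecord](toRecord(t)) -> NullWritable.get()))
```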

Using avro4s:

import com.sksamuel.avro4s.AvroSchema

val trafficSchema: Schema = AvroSchema[TrafficSchema]
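The derived schema is registered with the job the same way, and avro4s can also build the GenericRecord for you, so no hand-written conversion is needed. A sketch, assuming avro4s is on the classpath:

```scala
import com.sksamuel.avro4s.{AvroSchema, RecordFormat}
import org.apache.avro.generic.GenericRecord
import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapreduce.AvroJob
import org.apache.hadoop.io.NullWritable

// Schema derived from the case class, registered on the Hadoop job
AvroJob.setOutputKeySchema(job, trafficSchema)

// RecordFormat converts between the case class and an Avro record
val format = RecordFormat[TrafficSchema]
val intermediateRDD = trackerData.mapPartitions(
  iter => iter.map(t => new AvroKey[GenericRecord](format.to(t)) -> NullWritable.get()))
```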