java.io.NotSerializableException when saving an RDD to HBase with Spark Streaming

Asked: 2017-10-16 11:11:23

Tags: scala apache-spark hbase

java.io.NotSerializableException keeps bothering me when I process data with Spark.

val hbase_conf = HBaseConfiguration.create()
hbase_conf.set("hbase.zookeeper.property.clientPort", "2181")
hbase_conf.set("hbase.zookeeper.quorum", "hadoop-zk0.s.qima-inc.com,hadoop-zk1.s.qima-inc.com,hadoop-zk2.s.qima-inc.com")
val newAPIJobConfiguration = Job.getInstance(hbase_conf);
newAPIJobConfiguration.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, "mytest_table");
newAPIJobConfiguration.setOutputFormatClass(classOf[org.apache.hadoop.hbase.mapreduce.TableOutputFormat[ImmutableBytesWritable]])
newAPIJobConfiguration.getConfiguration().set("mapreduce.output.fileoutputformat.outputdir", "/tmp")
mydata.foreachRDD( rdd => {
  val json_rdd = rdd.map(Json.parse _ ).map(_.validate[Scan])
    .map(Scan.transformScanRestult _)
    .filter(_.nonEmpty)
    .map(_.get)
    .map(Scan.convertForHbase _ )
  json_rdd.saveAsNewAPIHadoopDataset(newAPIJobConfiguration.getConfiguration)
})

However, it fails with java.io.NotSerializableException, and the error message is as follows:

17/10/16 18:56:50 ERROR Utils: Exception encountered
        java.io.NotSerializableException: org.apache.hadoop.mapreduce.Job
        at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
        at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
        at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
        at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
        at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
        at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)

So I changed my code as follows:

object mytest_config{
    val hbase_conf = HBaseConfiguration.create()
    hbase_conf.set("hbase.zookeeper.property.clientPort", "2181")
    hbase_conf.set("hbase.zookeeper.quorum", "zk1,zk2")
    val newAPIJobConfiguration = Job.getInstance(hbase_conf);
    newAPIJobConfiguration.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, "mytest_table");
    newAPIJobConfiguration.setOutputFormatClass(classOf[org.apache.hadoop.hbase.mapreduce.TableOutputFormat[ImmutableBytesWritable]])
    newAPIJobConfiguration.getConfiguration().set("mapreduce.output.fileoutputformat.outputdir", "/tmp")
  }

mydata.foreachRDD( rdd => {
      val json_rdd = rdd.map(Json.parse _ )
        .map(_.validate[Scan])
        .map(Scan.transformScanRestult _)
        .filter(_.nonEmpty)
        .map(_.get)
        .map(Scan.convertForHbase _ )

     json_rdd.saveAsNewAPIHadoopDataset(mytest_config.newAPIJobConfiguration.getConfiguration)
})

This works! Can anyone explain why it works, and what is the officially recommended approach?

1 Answer:

Answer 0 (score: 2):

The error occurs because

newAPIJobConfiguration is initialized in the driver:

val newAPIJobConfiguration = Job.getInstance(hbase_conf);

and it is then used inside the worker closure passed to foreachRDD:

json_rdd.saveAsNewAPIHadoopDataset(newAPIJobConfiguration.getConfiguration)

Since the closure captures the Job instance, Spark has to serialize it when it sets up the streaming operation, and org.apache.hadoop.mapreduce.Job is not serializable, hence the exception. Moving the configuration into an object works because the object itself is never captured by the closure; it is initialized lazily in whichever JVM first references it, so nothing needs to be serialized.
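
For reference, here is a minimal sketch (my own addition, not part of the original answer) of another way to avoid the exception: build the non-serializable Job inside the foreachRDD body, so the DStream closure never captures it. The ZooKeeper quorum, table name, and the Json/Scan helpers are simply reused from the question's snippets:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.mapreduce.Job

mydata.foreachRDD( rdd => {
  // This block runs on the driver for each batch; the Job is a local variable,
  // so the closure carries no non-serializable free variables.
  val hbase_conf = HBaseConfiguration.create()
  hbase_conf.set("hbase.zookeeper.property.clientPort", "2181")
  hbase_conf.set("hbase.zookeeper.quorum", "zk1,zk2")
  val job = Job.getInstance(hbase_conf)
  job.getConfiguration.set(TableOutputFormat.OUTPUT_TABLE, "mytest_table")
  job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])
  job.getConfiguration.set("mapreduce.output.fileoutputformat.outputdir", "/tmp")

  val json_rdd = rdd.map(Json.parse _)
    .map(_.validate[Scan])
    .map(Scan.transformScanRestult _)
    .filter(_.nonEmpty)
    .map(_.get)
    .map(Scan.convertForHbase _)

  json_rdd.saveAsNewAPIHadoopDataset(job.getConfiguration)
})

Compared with the object-based approach, this keeps all the HBase wiring local to the batch, at the cost of re-creating the Job on every micro-batch.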