java.lang.NoSuchMethodError: org.apache.hadoop.mapred.TaskID.&lt;init&gt;(Lorg/apache/hadoop/mapreduce/JobID;Lorg/apache/hadoop/mapreduce/TaskType;I)V

Date: 2016-12-09 08:17:23

Tags: hadoop apache-spark mapreduce

import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapred.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapred.JobConf

// Configure the old-API (mapred) Hadoop job to write into an HBase table.
val jobConf = new JobConf(hbaseConf)
jobConf.setOutputFormat(classOf[TableOutputFormat])
jobConf.set(TableOutputFormat.OUTPUT_TABLE, tablename)

val indataRDD = sc.makeRDD(Array("1,jack,15", "2,Lily,16", "3,mike,16"))

// Parse each "id,name,age" record into an HBase Put keyed by the numeric id.
val rdd = indataRDD.map(_.split(',')).map { arr =>
  val put = new Put(Bytes.toBytes(arr(0).toInt))
  put.add(Bytes.toBytes("cf"), Bytes.toBytes("name"), Bytes.toBytes(arr(1)))
  put.add(Bytes.toBytes("cf"), Bytes.toBytes("age"), Bytes.toBytes(arr(2).toInt))
  (new ImmutableBytesWritable, put)
}
rdd.saveAsHadoopDataset(jobConf)
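For reference, hbaseConf, sc, and tablename above are assumed to be defined elsewhere; a minimal, illustrative setup might look like the following (the master URL, ZooKeeper quorum, and table name are placeholders, not from the original post):

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative only: adjust the master URL and ZooKeeper quorum to your cluster.
val sparkConf = new SparkConf().setAppName("HBaseWrite").setMaster("local[2]")
val sc = new SparkContext(sparkConf)

val hbaseConf = HBaseConfiguration.create()
hbaseConf.set("hbase.zookeeper.quorum", "localhost")
val tablename = "user"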

When I run the Hadoop/Spark job, I keep hitting this error:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.mapred.TaskID.<init>(Lorg/apache/hadoop/mapreduce/JobID;Lorg/apache/hadoop/mapreduce/TaskType;I)V
at org.apache.spark.SparkHadoopWriter.setIDs(SparkHadoopWriter.scala:158)
at org.apache.spark.SparkHadoopWriter.preSetup(SparkHadoopWriter.scala:60)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1188)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1161)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1161)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1161)
at com.iteblog.App$.main(App.scala:62)
at com.iteblog.App.main(App.scala)
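For reference, the failing call roughly corresponds to constructing a Hadoop 2 task id like this (a hedged sketch, not Spark's actual code; the identifier strings are made up):

import org.apache.hadoop.mapred.{JobID, TaskID}
import org.apache.hadoop.mapreduce.TaskType

// SparkHadoopWriter needs this Hadoop 2 constructor, which takes a
// mapreduce.TaskType; jars built against the old MR1 API do not declare it.
val jobId  = new JobID("jobtracker", 1)
val taskId = new TaskID(jobId, TaskType.MAP, 0)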

At first I thought it was a jar conflict, but I checked the jars carefully and found no duplicates. The Spark and Hadoop versions are:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.0.1</version>
</dependency>

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>2.6.0-mr1-cdh5.5.0</version>
</dependency>

I found that TaskID and TaskType are both in the hadoop-core jar, but they are not in the same package. Why does mapred.TaskID reference mapreduce.TaskType?
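A quick way to investigate (a sketch using only standard Java reflection, not from the original post; the printed path depends on your classpath) is to print which jar actually provides org.apache.hadoop.mapred.TaskID at runtime and which constructors it declares:

// Shows the jar TaskID is loaded from and its declared constructors,
// so you can check for the (mapreduce.JobID, mapreduce.TaskType, int) one.
val taskIdClass = classOf[org.apache.hadoop.mapred.TaskID]
println(taskIdClass.getProtectionDomain.getCodeSource.getLocation)
taskIdClass.getDeclaredConstructors.foreach(println)

If the listed constructors take a boolean isMap flag instead of a TaskType, the class is likely coming from the old MR1 API, which would match the NoSuchMethodError above.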

2 Answers:

Answer 0 (score: 1)

Oh, I have solved this problem by adding the Maven dependency:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-core</artifactId>
  <version>2.6.0-cdh5.5.0</version>
</dependency>

The error is gone!
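One likely explanation, not spelled out in the answer, is that hadoop-mapreduce-client-core provides the Hadoop 2 version of org.apache.hadoop.mapred.TaskID, which declares the (mapreduce.JobID, TaskType, int) constructor that Spark's SparkHadoopWriter calls, whereas the old MR1 hadoop-core jar predates it. To confirm at runtime, a plain-reflection check (a sketch, not from the original post) is to look the constructor up explicitly; it throws NoSuchMethodException if the constructor is missing:

import org.apache.hadoop.mapred.TaskID
import org.apache.hadoop.mapreduce.{JobID, TaskType}

// Throws NoSuchMethodException if the Hadoop 2 constructor is missing
// from whatever jar provides TaskID on the classpath.
val ctor = classOf[TaskID].getConstructor(
  classOf[JobID], classOf[TaskType], classOf[Int])
println("Found constructor: " + ctor)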

Answer 1 (score: 0)

I have also run into this kind of problem; it was mainly caused by a jar issue:

Exception in thread "main" java.lang.NoSuchMethodError:

Add the jar file from Maven, spark-core_2.10:
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>2.0.2</version>
</dependency>

After changing the jar file, the error was gone.