Hadoop job fails with java.lang.ClassNotFoundException

Date: 2013-06-09 14:41:07

Tags: scala hadoop classnotfoundexception

I have searched for a solution to this problem, to no avail. I have three Scala classes: MaxTemperatureMapper, MaxTemperatureReducer, and MaxTemperatureDriver (implementations below). Someone in a previous thread suggested removing the job.setJar() call from the main class of MaxTemperatureDriver to get the Hadoop job to run; that did not work for me. I run the job with either of the two commands below, and keep getting the following stack trace:

  1. hadoop com.koadr.hadoop.MaxTemperatureDriver micro/sample.txt output

  2. hadoop jar target/classes/koadr-hadoop-1.0-SNAPSHOT.jar com.koadr.hadoop.MaxTemperatureDriver micro/sample.txt output

  3. java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.koadr.hadoop.MaxTemperatureMapper not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1587)
    at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:191)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:631)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
    Caused by: java.lang.ClassNotFoundException: Class com.koadr.hadoop.MaxTemperatureMapper not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1493)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1585)
    ... 8 more
    

    I am using the IntelliJ IDE, with Maven for dependencies and compilation. I run the Hadoop commands above from the command line. Can someone explain where I am going wrong? Why is the Mapper class not found?

    MaxTemperatureMapper:

    class MaxTemperatureMapper extends Mapper[Object, Text, Text, IntWritable] {

      private def missing(temp: String): Boolean = {
        temp.equals("+9999")
      }

      override def map(key: Object, value: Text, context: Mapper[Object, Text, Text, IntWritable]#Context) = {
        val line: String = value.toString
        val year: String = line.substring(15, 19)
        val temp: String = line.substring(87, 92)

        if (!missing(temp)) {
          val airTemp: Int = Integer.parseInt(temp)
          context.write(new Text(year), new IntWritable(airTemp))
        }
      }
    }
    

    MaxTemperatureReducer:

    import scala.collection.JavaConverters._

    class MaxTemperatureReducer extends Reducer[Text, IntWritable, Text, IntWritable] {

      override
      def reduce(key: Text, values: java.lang.Iterable[IntWritable], context: Reducer[Text, IntWritable, Text, IntWritable]#Context) = {
        def maxVal(values: List[IntWritable], cMaxV: Int): Int = {
          if (values.isEmpty) cMaxV
          else maxVal(values.tail, math.max(cMaxV, values.head.get()))
        }
        // values is a java.lang.Iterable, so convert it before using Scala List operations
        context.write(key, new IntWritable(maxVal(values.asScala.toList, Integer.MIN_VALUE)))
      }

    }
    

    MaxTemperatureDriver:

    class MaxTemperatureDriver extends Configured with Tool {
      override
      def run(args: Array[String]): Int = {
        if (args.length != 2) {
          System.err.printf("Usage: %s [generic options] <input> <output>\n", getClass.getSimpleName)
          ToolRunner.printGenericCommandUsage(System.err)
          return -1  // explicit return, otherwise execution falls through to job setup
        }
        val job: Job = Job.getInstance(getConf, "Max Temperature")
        job.setJarByClass(getClass)

        FileInputFormat.addInputPath(job, new Path(args(0)))
        FileOutputFormat.setOutputPath(job, new Path(args(1)))

        job.setMapperClass(classOf[MaxTemperatureMapper])
        job.setCombinerClass(classOf[MaxTemperatureReducer])
        job.setReducerClass(classOf[MaxTemperatureReducer])

        job.setOutputKeyClass(classOf[Text])
        job.setOutputValueClass(classOf[IntWritable])

        if (job.waitForCompletion(true)) 0 else 1
      }
    }
    
    object MaxTemperatureDriver {
      def main(args: Array[String]) = {
        val exitCode: Int = ToolRunner.run(new MaxTemperatureDriver, args)
        System.exit(exitCode)
      }
    }
    

1 Answer:

Answer 0 (score: 4)

When submitting the job, you need to add any dependencies to both HADOOP_CLASSPATH and -libjars, as in the following examples:

Use the following command to add all the jar dependencies from (for example) the current directory and the lib directory:

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:`echo *.jar`:`echo lib/*.jar | sed 's/ /:/g'`
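A quick way to convince yourself that the substitution builds the string you expect is to run it against a throwaway directory; the jar names below are placeholders, not files from the question:

```shell
# Build a scratch layout with empty placeholder jars
mkdir -p cp-demo/lib
touch cp-demo/a.jar cp-demo/lib/b.jar cp-demo/lib/c.jar
cd cp-demo

# Same substitutions as the export above:
# current-directory jars, then lib jars joined with ':'
CP=`echo *.jar`:`echo lib/*.jar | sed 's/ /:/g'`
echo "$CP"   # a.jar:lib/b.jar:lib/c.jar
```

The shell expands each glob into a space-separated, sorted list, and sed rewrites the spaces into classpath separators.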

Remember that when launching a job via hadoop jar you also need to pass the jars of any dependencies through -libjars. I like to use:

hadoop jar <jar> <class> -libjars `echo ./lib/*.jar | sed 's/ /,/g'` [args...]

Note: the sed commands use different separators; HADOOP_CLASSPATH entries are :-separated, while the -libjars list must be ,-separated.
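The separator difference is easy to see side by side, again using placeholder jar files rather than anything from the question:

```shell
mkdir -p sep-demo/lib
touch sep-demo/lib/x.jar sep-demo/lib/y.jar
cd sep-demo

# ':' separator for HADOOP_CLASSPATH entries
HCP=`echo lib/*.jar | sed 's/ /:/g'`
# ',' separator for the -libjars argument
LJ=`echo lib/*.jar | sed 's/ /,/g'`
echo "$HCP"   # lib/x.jar:lib/y.jar
echo "$LJ"    # lib/x.jar,lib/y.jar
```

Mixing the two up is a common mistake: a comma in HADOOP_CLASSPATH or a colon in -libjars will be silently treated as part of a (nonexistent) file name.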