NoClassDefFoundError in a Hadoop wordcount program

Posted: 2013-06-11 14:33:50

Tags: java hadoop

I am running the Hadoop wordcount program, but it gives me a "NoClassDefFoundError".

The command I run:

 hadoop -jar /home/user/Pradeep/sample.jar hdp_java.WordCount /user/hduser/ana.txt /user/hduser/prout
 Exception in thread "main" java.lang.NoClassDefFoundError: WordCount
 Caused by: java.lang.ClassNotFoundException: WordCount
     at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
     at java.security.AccessController.doPrivileged(Native Method)
     at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
 Could not find the main class: WordCount. Program will exit.

I created the program in Eclipse and then exported it as a jar file.

Eclipse code:

package hdp_java;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        Job job = new Job(conf, "wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }

}

Can anyone tell me where I went wrong?

2 answers:

Answer 0 (score: 2):

You need to tell the Hadoop job which jar to use:

job.setJarByClass(WordCount.class);
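For reference, a minimal sketch of where this call would sit in the driver from the question (only the main method is shown; the imports and the rest of the class are unchanged):

public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "wordcount");

    // Tell Hadoop which jar contains this driver class, so the nodes
    // running the tasks can find Map and Reduce on their classpath.
    job.setJarByClass(WordCount.class);

    // The rest of the job setup from the question stays the same.
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.waitForCompletion(true);
}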

When submitting the job, be sure to add any dependencies to both the HADOOP_CLASSPATH and -libjars, as in the following examples:

Use the following command to add all the jar dependencies from (for example) the current and lib directories:

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:`echo *.jar`:`echo lib/*.jar | sed 's/ /:/g'`

Remember that when starting a job via hadoop jar, you will also need to pass the jars of any dependencies through -libjars. I like to use:

hadoop jar <jar> <class> -libjars `echo ./lib/*.jar | sed 's/ /,/g'` [args...]

Note: the sed commands use different delimiters; HADOOP_CLASSPATH entries are : separated, while -libjars entries need to be , separated.
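For example, two hypothetical jars lib/a.jar and lib/b.jar would end up as lib/a.jar:lib/b.jar in HADOOP_CLASSPATH, but as lib/a.jar,lib/b.jar in the -libjars argument.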

Answer 1 (score: 1):

Add the following line to your code:

job.setJarByClass(WordCount.class);

If it still doesn't work, export this job as a jar and add that jar to the project itself as an external jar, then see whether it works.