Encountering an error while practicing Hadoop

Time: 2015-06-06 01:16:32

Tags: hadoop mapreduce

I'm trying to run a Hadoop MapReduce job, but I get the error below. Not sure why...

  

hadoop jar BWC11.jar WordCountDriver "/home/training/training_material/data/shakespeare/comedies" "/home/training/training_material/data/shakespeare/awl"
Warning: $HADOOP_HOME is deprecated.

Exception in thread "main" java.lang.NoClassDefFoundError: WordCountDriver (wrong name: com/felix/hadoop/training/WordCountDriver)
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:791)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:410)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:264)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
[training@localhost BasicWordCount]$

Can someone help me fix this?

Driver code:

package com.felix.hadoop.training;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;


public class WordCountDriver extends Configured implements Tool{

    public static void main(String[] args) throws Exception
    {
        // Exit with the run() status so failures are visible to the shell.
        System.exit(ToolRunner.run(new WordCountDriver(), args));
    }

    @Override
    public int run(String[] args) throws Exception {

        Job job = new Job(getConf(),"Basic Word Count Job");
        job.setJarByClass(WordCountDriver.class);

        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        job.setInputFormatClass(TextInputFormat.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setNumReduceTasks(1);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Propagate the job's success/failure as the tool's exit status.
        return job.waitForCompletion(true) ? 0 : 1;
    }


}

Mapper code:

package com.felix.hadoop.training;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
/**
 * 
 * @author training
 * Class : WordCountMapper
 *
 */

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>{
    /**
     * Optimization: instead of creating the output objects inside map()
     * for every record, they could be created once as fields and reused
     * (see the sketch after this class).
     */

    @Override
    public void map(LongWritable inputKey,Text inputVal,Context context) throws IOException,InterruptedException
    {
        String line = inputVal.toString();
        String[] splits = line.trim().split("\\W+");
        for (String outputKey : splits)
        {
            // Skip the empty token that split() produces when the line
            // is empty or starts with a non-word character.
            if (outputKey.isEmpty()) {
                continue;
            }
            context.write(new Text(outputKey), new IntWritable(1));
        }
    }

}
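
As an aside, here is a minimal sketch of the reuse optimization the comment above alludes to. This is not the original poster's code; the class name ReusingWordCountMapper is made up for illustration. The output Text and IntWritable are allocated once per mapper and reset for each token, instead of being allocated for every record:

package com.felix.hadoop.training;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ReusingWordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    // Allocated once per mapper and reused for every record; Hadoop
    // copies the bytes out during context.write(), so reuse is safe.
    private final Text word = new Text();
    private static final IntWritable ONE = new IntWritable(1);

    @Override
    public void map(LongWritable inputKey, Text inputVal, Context context)
            throws IOException, InterruptedException {
        for (String token : inputVal.toString().trim().split("\\W+")) {
            if (token.isEmpty()) {
                continue; // skip empty tokens from leading delimiters
            }
            word.set(token);
            context.write(word, ONE);
        }
    }
}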

Reducer code:

package com.felix.hadoop.training;
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;


public class WordCountReducer extends Reducer<Text,IntWritable,Text, IntWritable>{

    @Override
    public void reduce(Text key,Iterable<IntWritable> listOfValues,Context context) throws IOException,InterruptedException
    {
        int sum=0;
        for(IntWritable val:listOfValues)
        {
            sum = sum + val.get();
        }
        context.write(key, new IntWritable(sum));
    }

}

I'm not sure why I'm getting this error. I've tried adding the class files to the classpath, copying them to the directory where the .jar file lives, and so on, but to no avail.

1 Answer:

Answer 0 (score: 0)

Add the package name "com.felix.hadoop.training" before "WordCountDriver" when you launch the job, i.e. refer to the driver by its fully qualified class name.
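
For example, with the (reconstructed) paths from the question:

hadoop jar BWC11.jar com.felix.hadoop.training.WordCountDriver "/home/training/training_material/data/shakespeare/comedies" "/home/training/training_material/data/shakespeare/awl"

The JVM found the file WordCountDriver.class but saw that its binary name is com/felix/hadoop/training/WordCountDriver, hence the NoClassDefFoundError with "wrong name"; passing the fully qualified class name resolves the mismatch.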