How to compile my Java program (WordCount) for Hadoop

Date: 2019-02-06 16:21:48

Tags: java hadoop

I am trying to compile my first MapReduce program: WordCount.

Here is my WordCountMapper class:


package wordcount;

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();

        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }

    // Mirrors Mapper's default run() implementation: setup, map each record, cleanup
    @Override
    public void run(Context context) throws IOException, InterruptedException {
        setup(context);
        while (context.nextKeyValue()) {
            map(context.getCurrentKey(), context.getCurrentValue(), context);
        }
        cleanup(context);
    }
}

I also have WordCountReducer and WordCountDriving classes. I tried to compile everything with the following command:

javac -classpath $HADOOP_CLASSPATH WordCount*.java

But the result was completely unexpected: 12 errors.

The problem is that I don't understand the errors it gives me; it is as if the compiler cannot see the ';' that is there. I have tried rewriting the code, but nothing changes.
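
A likely culprit (not confirmed by the question itself) is that $HADOOP_CLASSPATH does not actually contain the Hadoop client jars when javac runs, which can produce exactly this kind of cascade of confusing errors. A minimal sketch of a compile-and-package sequence, assuming the hadoop command-line tool is installed and on the PATH (the classes output directory and the wordcount.jar name are placeholders):

# Populate the classpath from the local Hadoop installation
export HADOOP_CLASSPATH=$(hadoop classpath)

# Compile the sources into a separate output directory;
# -d creates the wordcount/ package directory automatically
mkdir -p classes
javac -classpath "$HADOOP_CLASSPATH" -d classes WordCount*.java

# Package the compiled classes into a jar for submission
jar cf wordcount.jar -C classes .

If that builds, the job would typically be submitted with something like hadoop jar wordcount.jar wordcount.WordCountDriving <input> <output>, where the main class name follows the question and the arguments depend on the driver's implementation.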

0 Answers:

No answers yet.