Running the WordCount example MapReduce job on AWS EMR

Date: 2015-03-15 18:16:35

Tags: java hadoop amazon-web-services emr

I am trying to run a word count example on AWS EMR, but I am having a hard time deploying and running the jar on the cluster. It is a customized word count example that does some JSON parsing, and the input is in my S3 bucket. When I try to run the job on the EMR cluster, the error I get is that no main function can be found in my Mapper class. Everywhere on the internet, the code for a word count MapReduce job looks just like mine: three classes, a static mapper class that extends Mapper, a reducer that extends Reducer, and a main class containing the job configuration. So I am not sure why I am seeing this error. I build my code with the maven assembly plugin so that all third-party dependencies are bundled into my jar. Here is the code I wrote:

package com.amalwa.hadoop.MapReduce;

import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import com.google.gson.Gson;

public class ETL{

    public static void main(String[] args) throws Exception{
        if (args.length < 2) {
            System.err.println("Usage: ETL <input path> <output path>");
            System.exit(-1);
        }
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "etl"); // the Job(Configuration, String) constructor is deprecated
        job.setJarByClass(ETL.class);

        job.setMapperClass(JsonParserMapper.class);
        job.setReducerClass(JsonParserReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(TweetArray.class);
        // args[0] is the input path and args[1] the output path, matching the usage message above
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }

    public static class JsonParserMapper extends Mapper<LongWritable, Text, Text, Text>{
        private Text mapperKey = null;
        private Text mapperValue = null;
        Date filterDate = getDate("Sun Apr 20 00:00:00 +0000 2014");

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String jsonString = value.toString();
            if(!jsonString.isEmpty()){
                @SuppressWarnings("unchecked")
                Map<String, Object> tweetData = new Gson().fromJson(jsonString, HashMap.class);
                Date timeStamp = getDate(tweetData.get("created_at").toString());
                if(timeStamp.after(filterDate)){
                    @SuppressWarnings("unchecked")
                    com.google.gson.internal.LinkedTreeMap<String, Object> userData = (com.google.gson.internal.LinkedTreeMap<String, Object>) tweetData.get("user");
                    mapperKey = new Text(userData.get("id_str") + "~" + tweetData.get("created_at").toString());
                    mapperValue = new Text(tweetData.get("text").toString() + " tweetId = " + tweetData.get("id_str"));
                    context.write(mapperKey, mapperValue);
                }
            }
        }

        public Date getDate(String timeStamp){
            SimpleDateFormat simpleDateFormat = new SimpleDateFormat("E MMM dd HH:mm:ss Z yyyy");
            Date date = null;
            try {
                date = simpleDateFormat.parse(timeStamp);
            } catch (ParseException e) {
                e.printStackTrace();
            }
            return date;
        }
    }

    public static class JsonParserReducer extends Reducer<Text, Text, Text, TweetArray> {
        private ArrayList<Text> tweetList = new ArrayList<Text>();

        public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            // tweetList is an instance field, so clear it between keys;
            // otherwise tweets from previously reduced keys accumulate.
            tweetList.clear();
            for (Text val : values) {
                tweetList.add(new Text(val.toString()));
            }
            context.write(key, new TweetArray(Text.class, tweetList.toArray(new Text[tweetList.size()])));
        }
        }
    }
}
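
TweetArray is not shown above; it is a custom Writable. A minimal sketch that matches the constructor used in the reducer would be a thin subclass of Hadoop's ArrayWritable:

import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.Writable;

public class TweetArray extends ArrayWritable {
    // ArrayWritable already stores the value class and the array itself,
    // so the subclass only needs to forward the constructor arguments.
    public TweetArray(Class<? extends Writable> valueClass, Writable[] values) {
        super(valueClass, values);
    }
}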

It would be great if someone could clarify this problem. I have run this jar on my local machine with hadoop installed and it works fine, but it does not work when I set up my cluster on AWS and provide all the parameters for a streaming job. Here is how I configured the step (from my screenshot):


The Mapper textbox is set to: java -classpath MapReduce-0.0.1-SNAPSHOT-jar-with-dependencies.jar com.amalwa.hadoop.MapReduce.JsonParserMapper
The Reducer textbox is set to: java -classpath MapReduce-0.0.1-SNAPSHOT-jar-with-dependencies.jar com.amalwa.hadoop.MapReduce.JsonParserReducer

Thanks and regards.

2 Answers:

Answer 0 (score: 3):

You need to select the custom JAR step instead of the streaming program step.
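
A custom JAR step runs the jar's main class on the cluster, so the driver (ETL) configures the mapper and reducer itself instead of the streaming textboxes. For reference, a minimal sketch of adding such a step with the AWS SDK for Java (the bucket, paths, and cluster id below are placeholders, not values from the question):

import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder;
import com.amazonaws.services.elasticmapreduce.model.AddJobFlowStepsRequest;
import com.amazonaws.services.elasticmapreduce.model.HadoopJarStepConfig;
import com.amazonaws.services.elasticmapreduce.model.StepConfig;

public class AddCustomJarStep {
    public static void main(String[] args) {
        AmazonElasticMapReduce emr = AmazonElasticMapReduceClientBuilder.defaultClient();

        // Point the step at the assembled jar in S3 and name the driver class;
        // its main() receives the input and output paths as arguments.
        HadoopJarStepConfig jarStep = new HadoopJarStepConfig()
                .withJar("s3://my-bucket/MapReduce-0.0.1-SNAPSHOT-jar-with-dependencies.jar")
                .withMainClass("com.amalwa.hadoop.MapReduce.ETL")
                .withArgs("s3://my-bucket/input/", "s3://my-bucket/output/");

        StepConfig step = new StepConfig()
                .withName("etl")
                .withHadoopJarStep(jarStep)
                .withActionOnFailure("CONTINUE");

        emr.addJobFlowSteps(new AddJobFlowStepsRequest()
                .withJobFlowId("j-XXXXXXXXXXXXX")
                .withSteps(step));
    }
}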

Answer 1 (score: 0):

When you build the jar file (I usually use Eclipse or a custom gradle build), check whether the main class is set to ETL; apparently that does not happen by default. Also check which Java version you are using on your system. I believe AWS EMR works with Java 7.
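
Since the question builds with the maven assembly plugin, the main class would go in the jar manifest through the plugin's archive configuration. A minimal sketch of the relevant pom.xml fragment (assuming the standard maven-assembly-plugin; the class name is taken from the question):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <!-- Without this entry, "hadoop jar" has no main class to run
             unless one is passed explicitly after the jar name. -->
        <mainClass>com.amalwa.hadoop.MapReduce.ETL</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>

With that in place, running the jar with only the input and output paths as arguments should find ETL.main without naming the class on the command line.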