How can I read all of the Common Crawl data on AWS using Java?

Date: 2015-07-08 08:57:51

Tags: java hadoop amazon-s3 mapreduce common-crawl

I'm completely new to Hadoop and MapReduce programming, and I'm trying to write my first MapReduce program using Common Crawl data.

I want to read all of the April 2015 data from AWS. For example, if I wanted to download all of the April 2015 data on the command line, I would do:

s3cmd get s3://aws-publicdatasets/common-crawl/crawl-data/CC-MAIN-2015-18/segments/1429246633512.41/wat/*.warc.wat.gz

That command line works, but I don't want to download all of the April 2015 data; I only want to read all of the "warc.wat.gz" files (in order to analyze the data).
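
A minimal sketch of what reading a single "warc.wat.gz" file straight from S3 could look like, assuming the AWS credentials are already configured and using a placeholder file name:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import java.util.zip.GZIPInputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadOneWatFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumes the s3n credentials (fs.s3n.awsAccessKeyId / fs.s3n.awsSecretAccessKey) are set.
        String file = "s3n://aws-publicdatasets/common-crawl/crawl-data/CC-MAIN-2015-18/"
                + "segments/1429246633512.41/wat/example.warc.wat.gz"; // placeholder file name
        FileSystem fs = FileSystem.get(URI.create(file), conf);
        // Open the object as a stream and decompress it on the fly, without downloading it first.
        try (FSDataInputStream in = fs.open(new Path(file));
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(new GZIPInputStream(in)))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // print the decompressed content line by line
            }
        }
    }
}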

I tried to create my job, and it looks like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.LongSumReducer;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.apache.log4j.Logger;
// WARCFileInputFormat and FirstJobUrlTypeMap come from my own project / the
// Common Crawl example code; their imports are not shown here.

public class FirstJob extends Configured implements Tool {
    private static final Logger LOG = Logger.getLogger(FirstJob.class);

    /**
     * Main entry point that uses the {@link ToolRunner} class to run the Hadoop
     * job.
     */
    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new FirstJob(), args);
        System.out.println("done !!");
        System.exit(res);
    }

    /**
     * Builds and runs the Hadoop job.
     * 
     * @return 0 if the Hadoop job completes successfully and 1 otherwise.
     */
    public int run(String[] arg0) throws Exception {
        Configuration conf = getConf();
        //
        Job job = new Job(conf);
        job.setJarByClass(FirstJob.class);
        job.setNumReduceTasks(1);

        //String inputPath = "data/*.warc.wat.gz";
        String inputPath = "s3n://aws-publicdatasets/common-crawl/crawl-data/CC-MAIN-2015-18/segments/1429246633512.41/wat/*.warc.wat.gz";
        LOG.info("Input path: " + inputPath);
        FileInputFormat.addInputPath(job, new Path(inputPath));

        String outputPath = "/tmp/cc-firstjob/";
        FileSystem fs = FileSystem.newInstance(conf);
        if (fs.exists(new Path(outputPath))) {
            fs.delete(new Path(outputPath), true);
        }
        FileOutputFormat.setOutputPath(job, new Path(outputPath));

        job.setInputFormatClass(WARCFileInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        job.setMapperClass(FirstJobUrlTypeMap.ServerMapper.class);
        job.setReducerClass(LongSumReducer.class);

        if (job.waitForCompletion(true)) {
            return 0;
        } else {
            return 1;
        }
    }
}

But I get this error:

Exception in thread "main" java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively).

How can I solve this problem? Thanks in advance.

2 answers:

Answer 0 (score: 1):

You can try this github project.

Answer 1 (score: 0):

I solved my problem. In the code, change:

 Configuration conf = getConf();
 //
 Job job = new Job(conf);

to:

Configuration conf = new Configuration();
conf.set("fs.s3n.awsAccessKeyId", "your_key");
conf.set("fs.s3n.awsSecretAccessKey", "your_key");
Job job = new Job(conf);
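
Since the job is launched through ToolRunner, a variant of this fix (a sketch, not from the original answer) would be to keep getConf() and set the same two properties on it, so that any -D options passed on the command line are still honored:

Configuration conf = getConf();                 // keep ToolRunner's configuration
conf.set("fs.s3n.awsAccessKeyId", "your_key");  // same property names as in the error message
conf.set("fs.s3n.awsSecretAccessKey", "your_key");
Job job = new Job(conf);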