How do I convert this old-API MapReduce job code to the new MapReduce API?

Date: 2015-05-31 22:05:46

Tags: java http hadoop mapreduce

The following code is from Alex Holmes' Hadoop in Practice, 2nd Edition: https://github.com/alexholmes/hiped2/tree/master/src/main/java/hip/ch5/http

The mapper in this MapReduce job reads a list of URLs from a text file, issues an HTTP request for each URL, and stores the response body in a text file.

However, this code is written against the old MapReduce API, and I want to convert it to the new API. Changing JobConf to Job + Configuration and extending the new Mapper class should be straightforward, but for some reason I couldn't get it to work with my code.

To avoid confusion I'll hold off on posting my modified version; the original code is shown below:

Mapper code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.net.URLConnection;

public final class HttpDownloadMap
    implements Mapper<LongWritable, Text, Text, Text> {
  private int file = 0;
  private Configuration conf;
  private String jobOutputDir;
  private String taskId;
  private int connTimeoutMillis =
      DEFAULT_CONNECTION_TIMEOUT_MILLIS;
  private int readTimeoutMillis = DEFAULT_READ_TIMEOUT_MILLIS;
  private final static int DEFAULT_CONNECTION_TIMEOUT_MILLIS = 5000;
  private final static int DEFAULT_READ_TIMEOUT_MILLIS = 5000;

  public static final String CONN_TIMEOUT =
      "httpdownload.connect.timeout.millis";

  public static final String READ_TIMEOUT =
      "httpdownload.read.timeout.millis";

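  // Old-API lifecycle hook: called once per task to read the job output
  // directory, the task ID, and optional timeout overrides from the job config.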
  @Override
  public void configure(JobConf job) {
    conf = job;
    jobOutputDir = job.get("mapred.output.dir");
    taskId = conf.get("mapred.task.id");

    if (conf.get(CONN_TIMEOUT) != null) {
      connTimeoutMillis = Integer.valueOf(conf.get(CONN_TIMEOUT));
    }
    if (conf.get(READ_TIMEOUT) != null) {
      readTimeoutMillis = Integer.valueOf(conf.get(READ_TIMEOUT));
    }
  }

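  // Downloads the URL in `value` to a uniquely named file under the job
  // output directory, then emits a (file path, url) pair.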
  @Override
  public void map(LongWritable key, Text value,
                  OutputCollector<Text, Text> output,
                  Reporter reporter) throws IOException {
    Path httpDest =
        new Path(jobOutputDir, taskId + "_http_" + (file++));

    InputStream is = null;
    OutputStream os = null;
    try {
      URLConnection connection =
          new URL(value.toString()).openConnection();
      connection.setConnectTimeout(connTimeoutMillis);
      connection.setReadTimeout(readTimeoutMillis);
      is = connection.getInputStream();

      os = FileSystem.get(conf).create(httpDest);

      IOUtils.copyBytes(is, os, conf, true);
    } finally {
      IOUtils.closeStream(is);
      IOUtils.closeStream(os);
    }

    output.collect(new Text(httpDest.toString()), value);
  }

  @Override
  public void close() throws IOException {
  }
}

Job Runner / Driver Code:

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public final class HttpDownloadMapReduce {

  public static void main(String... args) throws Exception {
    runJob(args[0], args[1]);
  }

  public static void runJob(String src, String dest)
      throws Exception {
    JobConf job = new JobConf();
    job.setJarByClass(HttpDownloadMap.class);

    FileSystem fs = FileSystem.get(job);
    Path destination = new Path(dest);

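    // Remove any previous output so FileOutputFormat doesn't reject the path.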
    fs.delete(destination, true);

    job.setMapperClass(HttpDownloadMap.class);

    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);

    FileInputFormat.setInputPaths(job, src);
    FileOutputFormat.setOutputPath(job, destination);

    JobClient.runJob(job);
  }
}

Run configuration:

args[0] = "testData/input/urls.txt"
args[1] = "testData/output"

urls.txt contains:

http://www.google.com 
http://www.yahoo.com

1 Answer:

Answer 0 (score: 0):

Try the following changes:

  1. Import from the org.apache.hadoop.mapreduce package instead of the mapred package.

  2. Change OutputCollector and Reporter to Context, because the new API writes output through a single Context object.

  3. Change JobClient to Job and JobConf to Configuration. A sketch of the fully converted code follows below.
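Putting the three changes together, here is a minimal, untested sketch of the converted code against the new org.apache.hadoop.mapreduce API, assuming Hadoop 2.x. Instead of reading the deprecated mapred.output.dir and mapred.task.id properties, it derives the output directory from FileOutputFormat.getOutputPath(context) and the task ID from context.getTaskAttemptID(); on Hadoop 1.x, replace Job.getInstance(conf) with new Job(conf). Treat it as a starting point, not a verified drop-in replacement.

New mapper code (sketch):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.net.URLConnection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public final class HttpDownloadMap
    extends Mapper<LongWritable, Text, Text, Text> { // extend the class, not implement the interface

  public static final String CONN_TIMEOUT =
      "httpdownload.connect.timeout.millis";
  public static final String READ_TIMEOUT =
      "httpdownload.read.timeout.millis";

  private int file = 0;
  private Configuration conf;
  private String jobOutputDir;
  private String taskId;
  private int connTimeoutMillis;
  private int readTimeoutMillis;

  @Override
  protected void setup(Context context) {
    // setup(Context) replaces the old configure(JobConf) hook
    conf = context.getConfiguration();
    jobOutputDir = FileOutputFormat.getOutputPath(context).toString();
    taskId = context.getTaskAttemptID().getTaskID().toString();
    connTimeoutMillis = conf.getInt(CONN_TIMEOUT, 5000);
    readTimeoutMillis = conf.getInt(READ_TIMEOUT, 5000);
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    Path httpDest = new Path(jobOutputDir, taskId + "_http_" + (file++));

    InputStream is = null;
    OutputStream os = null;
    try {
      URLConnection connection = new URL(value.toString()).openConnection();
      connection.setConnectTimeout(connTimeoutMillis);
      connection.setReadTimeout(readTimeoutMillis);
      is = connection.getInputStream();
      os = FileSystem.get(conf).create(httpDest);
      IOUtils.copyBytes(is, os, conf, true);
    } finally {
      IOUtils.closeStream(is);
      IOUtils.closeStream(os);
    }

    // context.write() replaces output.collect(); no OutputCollector or Reporter
    context.write(new Text(httpDest.toString()), value);
  }
}

New job runner / driver code (sketch):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public final class HttpDownloadMapReduce {

  public static void main(String... args) throws Exception {
    runJob(args[0], args[1]);
  }

  public static void runJob(String src, String dest)
      throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf); // Hadoop 1.x: new Job(conf)
    job.setJarByClass(HttpDownloadMap.class);

    Path destination = new Path(dest);
    FileSystem.get(conf).delete(destination, true);

    job.setMapperClass(HttpDownloadMap.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);

    // note the mapreduce.lib.input / lib.output variants, not the mapred ones
    FileInputFormat.setInputPaths(job, new Path(src));
    FileOutputFormat.setOutputPath(job, destination);

    // waitForCompletion() replaces JobClient.runJob()
    if (!job.waitForCompletion(true)) {
      throw new Exception("Job failed");
    }
  }
}

As in the original, no reducer class or reduce count is set, so the default (identity) reducer should simply forward the (path, URL) pairs to the job output.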