Hadoop DistributedCache is deprecated - what is the preferred API?

Asked: 2014-01-20 16:53:10

Tags: java hadoop mapreduce

My map task needs some configuration data, which I would like to distribute via the distributed cache.

The Hadoop MapReduce Tutorial shows the usage of the DistributedCache class, roughly as follows:

// In the driver
JobConf conf = new JobConf(getConf(), WordCount.class);
...
DistributedCache.addCacheFile(new Path(filename).toUri(), conf); 

// In the mapper
Path[] myCacheFiles = DistributedCache.getLocalCacheFiles(job);
...

However, DistributedCache is marked as deprecated in Hadoop 2.2.0.

What is the new preferred way to achieve this? Is there an up-to-date example or tutorial covering this API?

6 answers:

Answer 0 (score: 50)

The APIs of the distributed cache can be found in the Job class itself. Check the documentation here: http://hadoop.apache.org/docs/stable2/api/org/apache/hadoop/mapreduce/Job.html. The code should be something like:

Job job = new Job();
...
job.addCacheFile(new Path(filename).toUri());

And in your mapper code:

Path[] localPaths = context.getLocalCacheFiles();
...
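
If it helps, here is a minimal sketch of reading the first returned local path in the mapper, assuming it is a plain-text configuration file (note that later 2.x releases also deprecate getLocalCacheFiles() in favour of getCacheFiles(), as a later answer points out):

// Minimal sketch; assumes java.io.BufferedReader and java.io.FileReader are imported.
Path[] localPaths = context.getLocalCacheFiles();
if (localPaths != null && localPaths.length > 0) {
    try (BufferedReader reader =
            new BufferedReader(new FileReader(localPaths[0].toString()))) {
        String line;
        while ((line = reader.readLine()) != null) {
            // parse each configuration line here
        }
    }
}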

Answer 1 (score: 20)

To expand on @jtravaglini, the preferred way to use the distributed cache with YARN/MapReduce 2 is as follows:

In your driver, use Job.addCacheFile():

public int run(String[] args) throws Exception {
    Configuration conf = getConf();

    Job job = Job.getInstance(conf, "MyJob");

    job.setMapperClass(MyMapper.class);

    // ...

    // Mind the # sign after the absolute file location.
    // You will be using the name after the # sign as your
    // file name in your Mapper/Reducer
    job.addCacheFile(new URI("/user/yourname/cache/some_file.json#some"));
    job.addCacheFile(new URI("/user/yourname/cache/other_file.json#other"));

    return job.waitForCompletion(true) ? 0 : 1;
}

In your Mapper/Reducer, override the setup(Context context) method:

@Override
protected void setup(
        Mapper<LongWritable, Text, Text, Text>.Context context)
        throws IOException, InterruptedException {
    if (context.getCacheFiles() != null
            && context.getCacheFiles().length > 0) {

        File some_file = new File("./some");
        File other_file = new File("./other");

        // Do things to these two files, like read them
        // or parse as JSON or whatever.
    }
    super.setup(context);
}
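
As a concrete example of that last step, a minimal helper for reading one of the symlinked files line by line might look like this (a sketch; the actual parsing depends on your file format, and the java.io/java.util imports are assumed):

// Sketch of a helper that reads a symlinked cache file line by line.
// Assumes imports of BufferedReader, FileReader, IOException, ArrayList, List.
private static List<String> readCachedFile(File cachedFile) throws IOException {
    List<String> lines = new ArrayList<>();
    try (BufferedReader reader = new BufferedReader(new FileReader(cachedFile))) {
        String line;
        while ((line = reader.readLine()) != null) {
            lines.add(line); // hand each line to your JSON parser of choice
        }
    }
    return lines;
}

This could be called as readCachedFile(some_file) from setup(), for example.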

Answer 2 (score: 5)

The new DistributedCache API for YARN/MR2 can be found in the org.apache.hadoop.mapreduce.Job class.

   Job.addCacheFile()

Unfortunately, there are not many comprehensive tutorial-style examples of this yet.

http://hadoop.apache.org/docs/stable/api/org/apache/hadoop/mapreduce/Job.html#addCacheFile%28java.net.URI%29
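
Until such a tutorial exists, a minimal driver fragment might look like this (a sketch; the HDFS path and class name are hypothetical, and new URI(...) can throw URISyntaxException, which must be declared or handled):

Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "cache-example");
job.setJarByClass(MyDriver.class);                        // hypothetical driver class
job.addCacheFile(new URI("/user/me/config.txt#config"));  // hypothetical HDFS path
// ... set mapper/reducer, input and output paths ...
System.exit(job.waitForCompletion(true) ? 0 : 1);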

Answer 3 (score: 2)

I did not use job.addCacheFile(). Instead, I used the -files option, e.g. "-files /path/to/myfile.txt#myfile", as before. Then in the mapper or reducer code I use the method below:

/**
 * This method can be used with local execution or HDFS execution. 
 * 
 * @param context
 * @param symLink
 * @param throwExceptionIfNotFound
 * @return
 * @throws IOException
 */
public static File findDistributedFileBySymlink(JobContext context, String symLink, boolean throwExceptionIfNotFound) throws IOException
{
    URI[] uris = context.getCacheFiles();
    if(uris==null||uris.length==0)
    {
        if(throwExceptionIfNotFound)
            throw new RuntimeException("Unable to find file with symlink '"+symLink+"' in distributed cache");
        return null;
    }
    URI symlinkUri = null;
    for(URI uri: uris)
    {
        if(symLink.equals(uri.getFragment()))
        {
            symlinkUri = uri;
            break;
        }
    }   
    if(symlinkUri==null)
    {
        if(throwExceptionIfNotFound)
            throw new RuntimeException("Unable to find file with symlink '"+symLink+"' in distributed cache");
        return null;
    }
    //if we run this locally the file system URI scheme will be "file" otherwise it should be a symlink
    return "file".equalsIgnoreCase(FileSystem.get(context.getConfiguration()).getScheme())?(new File(symlinkUri.getPath())):new File(symLink);

}

Then in the mapper/reducer:

@Override
protected void setup(Context context) throws IOException, InterruptedException
{
    super.setup(context);

    File file = HadoopUtils.findDistributedFileBySymlink(context,"myfile",true);
    ... do work ...
}

Note that if I use "-files /path/to/myfile.txt" directly, then I need to access the file as "myfile.txt", since that is the default symlink name.
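
Keep in mind that the -files generic option is only picked up when the driver goes through GenericOptionsParser, which normally means launching it via ToolRunner. A minimal sketch of such a driver (class and job names are hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // ToolRunner has already stripped the generic options (-files, -libjars, ...)
        // from args and applied them to getConf() before run() is called.
        Job job = Job.getInstance(getConf(), "files-option-example");
        job.setJarByClass(MyDriver.class);
        // ... set mapper/reducer, input and output paths ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // e.g. hadoop jar myjob.jar MyDriver -files /path/to/myfile.txt#myfile <in> <out>
        System.exit(ToolRunner.run(new Configuration(), new MyDriver(), args));
    }
}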

Answer 4 (score: 1)

None of the solutions mentioned worked for me in full. That may be because the Hadoop version keeps changing; I am using Hadoop 2.6.4. Essentially, DistributedCache is deprecated, so I did not want to use it. Several posts suggest using addCacheFile() instead, but it has changed a bit. Here is how it worked for me:

job.addCacheFile(new URI("hdfs://X.X.X.X:9000/EnglishStop.txt#EnglishStop.txt"));

Here X.X.X.X can be the master IP address or localhost. EnglishStop.txt is stored in HDFS at the root (/) location.

hadoop fs -ls /

Output:

-rw-r--r--   3 centos supergroup       1833 2016-03-12 20:24 /EnglishStop.txt
drwxr-xr-x   - centos supergroup          0 2016-03-12 19:46 /test

Funny but convenient: the #EnglishStop.txt fragment means we can now access the file as "EnglishStop.txt" in the mapper. Here is the code for that:

public void setup(Context context) throws IOException, InterruptedException
{
    File stopwordFile = new File("EnglishStop.txt");
    FileInputStream fis = new FileInputStream(stopwordFile);
    BufferedReader reader = new BufferedReader(new InputStreamReader(fis));

    String stopWord;
    while ((stopWord = reader.readLine()) != null) {
        // stopWord is a word read from the cached file
    }
    reader.close();
}

This worked for me. You can read lines from a file stored in HDFS this way.
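
A typical follow-up is to collect the words into a Set inside setup() and filter against it in map(). A sketch of the map() side, assuming a word-count style mapper (the field name and the mapper's input/output types are hypothetical):

private final Set<String> stopWords = new HashSet<String>();  // filled in setup() from "EnglishStop.txt"

@Override
protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
    for (String token : value.toString().split("\\s+")) {
        if (stopWords.contains(token.toLowerCase())) {
            continue;  // skip words found in the cached stop list
        }
        context.write(new Text(token), new IntWritable(1));
    }
}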

Answer 5 (score: 0)

I ran into the same problem. Not only is DistributedCache deprecated, but so are getLocalCacheFiles and "new Job". What worked for me is the following:

Driver:

Configuration conf = getConf();
Job job = Job.getInstance(conf);
...
job.addCacheFile(new Path(filename).toUri());

In the Mapper/Reducer setup:

@Override
protected void setup(Context context) throws IOException, InterruptedException
{
    super.setup(context);

    URI[] files = context.getCacheFiles(); // getCacheFiles returns null

    Path file1path = new Path(files[0]);
    ...
}
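
From there, a minimal sketch of actually reading the cached file through the Hadoop FileSystem API (assuming the URI points at the job's default file system and the file is plain text; variable names follow the snippet above):

// Assumes org.apache.hadoop.fs.FileSystem and the usual java.io imports.
FileSystem fs = FileSystem.get(context.getConfiguration());
try (BufferedReader reader =
        new BufferedReader(new InputStreamReader(fs.open(file1path)))) {
    String line;
    while ((line = reader.readLine()) != null) {
        // use each line of the cached file here
    }
}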