As far as I know, the distributed cache copies files to every node, and each map or reduce task then reads the file from the local file system.
My question is: is there a way to have Hadoop's distributed cache place the file in memory, so that every map or reduce can read it directly from memory?
My MapReduce program distributes a roughly 1 MB png image to every node; each map task then reads the image from the distributed cache and does some image processing together with another image taken from its map input.
Answer 0 (score: 2)
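The word-count example below registers an HDFS file with the distributed cache in the driver and reads the node-local copy back in the mapper via DistributedCache.getLocalCacheFiles():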
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Local paths of the files shipped via the distributed cache.
            Path[] cacheFiles = DistributedCache.getLocalCacheFiles(context.getConfiguration());
            try {
                // Note: this block runs once per input record, not once per task.
                BufferedReader readBuffer1 = new BufferedReader(new FileReader(cacheFiles[0].toString()));
                String line;
                while ((line = readBuffer1.readLine()) != null) {
                    System.out.println(line);
                }
                readBuffer1.close();
            } catch (Exception e) {
                System.out.println(e.toString());
            }

            // Standard word-count logic on the regular map input.
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        final String NAME_NODE = "hdfs://localhost:9000";
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Register the HDFS file with the distributed cache before submitting the job.
        DistributedCache.addCacheFile(new URI(NAME_NODE + "/dataset1.txt"),
                job.getConfiguration());
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
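A side note not stated in the original answer: the DistributedCache class used above is deprecated in the Hadoop 2.x MapReduce API. A minimal sketch of the equivalent calls, assuming Hadoop 2.x and reusing the job, NAME_NODE, and context variables from the example above:

// In the driver, instead of DistributedCache.addCacheFile(...):
job.addCacheFile(new URI(NAME_NODE + "/dataset1.txt"));

// In a mapper or reducer, instead of DistributedCache.getLocalCacheFiles(...);
// returns the cache entries as URIs.
java.net.URI[] cacheFiles = context.getCacheFiles();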
Answer 1 (score: 0)
As far as the code example above goes, it does not answer the original question, and it also demonstrates a suboptimal pattern. Ideally, you should access the cache file as part of the setup() method and cache any information you may want to use in the map() method. In the example above, the cache file is read once for every key/value pair, which hurts the performance of the MapReduce job. A sketch of the setup()-based pattern is shown below.
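A minimal sketch of that pattern, reusing the DistributedCache API from the answer above; the class name CachingTokenizerMapper and the cachedLines field are illustrative, not from the original post:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CachingTokenizerMapper
        extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
    // Contents of the cached file, loaded once per task and held in memory.
    private List<String> cachedLines = new ArrayList<String>();

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // setup() runs once per map task, before the first call to map().
        Path[] cacheFiles = DistributedCache.getLocalCacheFiles(context.getConfiguration());
        if (cacheFiles != null && cacheFiles.length > 0) {
            BufferedReader reader = new BufferedReader(new FileReader(cacheFiles[0].toString()));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    cachedLines.add(line);
                }
            } finally {
                reader.close();
            }
        }
    }

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // cachedLines is now available to every call without re-reading the file;
        // map() itself performs no cache-file I/O.
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}

For the ~1 MB png from the question, the same idea applies: decode the image once in setup() into a field (for example a java.awt.image.BufferedImage) and reuse that in-memory copy for every record, instead of re-reading the file in map().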