Distributed cache not working

Posted: 2014-01-22 07:02:45

Tags: hadoop mapreduce distributed-caching

I am storing a small amount of data (a few MB) in the distributed cache and using it to perform an anti-join against two big files. For a few lines of data in the cache the functionality works fine, but when the cache holds more data in production it fails to do the job, without throwing any error. Only a fraction of the records (around 20%) get joined; the rest are simply ignored. Is there an upper limit on the number of records that can be stored in the distributed cache? Why does it work for some records and ignore the rest? Any suggestion would be very helpful. Below is my code:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.HashSet;

    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.log4j.Logger;

    // TextPair and SSConstants are my own helper classes.
    public class MyMapper extends Mapper<LongWritable, Text, Text, TextPair> {

        Text albumKey = new Text();
        Text photoKey = new Text();
        // Keys of deleted photos and albums, loaded from the distributed cache in setup()
        private HashSet<String> photoDeleted = new HashSet<String>();
        private HashSet<String> albDeleted = new HashSet<String>();
        Text interKey = new Text();
        private TextPair interValue = new TextPair();
        private static final Logger LOGGER = Logger.getLogger(MyMapper.class);

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            int count = 0;
            Path[] cacheFiles = DistributedCache.getLocalCacheFiles(context.getConfiguration());
            try {
                if (cacheFiles != null && cacheFiles.length > 0) {
                    System.out.println(cacheFiles.length);
                    LOGGER.info(cacheFiles + "****");
                    for (Path path : cacheFiles) {
                        String line;
                        String[] tokens;

                        BufferedReader joinReader = new BufferedReader(new FileReader(path.toString()));
                        System.out.println(path.toString());
                        try {
                            while ((line = joinReader.readLine()) != null) {
                                count++;
                                // Each cache line is "<type>\t<key>"; split into at most two tokens
                                tokens = line.split(SSConstants.TAB, 2);
                                if (tokens.length < 2) {
                                    System.out.println("WL");
                                    continue;
                                }
                                if (tokens[0].equals("P")) {
                                    photoDeleted.add(tokens[1]);
                                } else if (tokens[0].equals("A")) {
                                    albDeleted.add(tokens[1]);
                                }
                            }
                        } finally {
                            joinReader.close();
                        }
                    }
                }
            } catch (IOException e) {
                System.out.println("Exception reading DistributedCache: " + e);
            }
            System.out.println(count);
            System.out.println("albdeleted *****" + albDeleted.size());
            System.out.println("photo deleted *****" + photoDeleted.size());
            LOGGER.info("albdeleted *****" + albDeleted.size());
            LOGGER.info("photodeleted *****" + photoDeleted.size());
        }

        @Override
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            // my mapper code
        }
    }
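
For completeness, this is how the cache file gets registered on the driver side — a minimal sketch using the same old-style org.apache.hadoop.filecache.DistributedCache API as above; the path, job name, and Driver class here are placeholders, not my actual production setup:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.mapreduce.Job;

    public class Driver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Register the small "deleted keys" file before creating the Job,
            // so every mapper can read it back via getLocalCacheFiles() in setup().
            DistributedCache.addCacheFile(new URI("/user/hypothetical/deleted_keys/part-m-00000"), conf);

            Job job = new Job(conf, "anti-join");
            job.setJarByClass(Driver.class);
            job.setMapperClass(MyMapper.class);
            // ... input/output paths and the rest of the job setup ...
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }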

1 Answer:

Answer 0 (score: 0):

According to the blog article:

    The local.cache.size parameter controls the size of the DistributedCache.

    By default, it is set to 10 GB.

So if you have more than 10 GB of data in the cache, that may be your problem.
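
If that turns out to be the cause, the limit can be raised on the TaskTracker nodes. A minimal sketch (the 20 GB value is just an example; the property takes a byte count):

    <!-- mapred-site.xml on the TaskTracker nodes: raise the local cache limit to 20 GB.
         local.cache.size is the old property name; newer releases map it to
         mapreduce.tasktracker.cache.local.size. -->
    <property>
      <name>local.cache.size</name>
      <value>21474836480</value>
    </property>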