Hadoop DistributedCache object changed during the job

Date: 2013-04-08 17:56:18

Tags: java hadoop amazon-web-services mapreduce elastic-map-reduce

I'm trying to run KMeans on AWS, and I ran into the following exception when attempting to read the updated cluster centroids from the DistributedCache:

java.io.IOException: The distributed cache object s3://mybucket/centroids_6/part-r-00009 changed during the job from 4/8/13 2:20 PM to 4/8/13 2:20 PM
at org.apache.hadoop.filecache.TrackerDistributedCacheManager.downloadCacheObject(TrackerDistributedCacheManager.java:401)
at org.apache.hadoop.filecache.TrackerDistributedCacheManager.localizePublicCacheObject(TrackerDistributedCacheManager.java:475)
at org.apache.hadoop.filecache.TrackerDistributedCacheManager.getLocalCache(TrackerDistributedCacheManager.java:191)
at org.apache.hadoop.filecache.TaskDistributedCacheManager.setupCache(TaskDistributedCacheManager.java:182)
at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1246)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1237)
at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1152)
at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2541)
at java.lang.Thread.run(Thread.java:662)

What sets this question apart from this one is the fact that the error appears intermittently. I have run the same code successfully on smaller datasets. Furthermore, when I change the number of centroids from 12 (seen in the code above) to 8, it fails on iteration 5 instead of 6 (which you can see in the centroids_6 name above).
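One detail worth noticing in the exception itself: both timestamps in the message print as "4/8/13 2:20 PM", identical at the displayed precision, so the change Hadoop detected must lie below minute granularity, e.g. S3 returning slightly different modification-time metadata between the submit-time check and the localization-time check. The following is a minimal, self-contained sketch of that kind of freshness check; the class and method names (`CacheCheck`, `verifyUnchanged`) are illustrative stand-ins, not Hadoop's actual internals:

```java
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;

/** Illustrates the freshness check that raises "changed during the job". */
public class CacheCheck {

    /** Throws if the file's current mtime differs from the one recorded at job submission. */
    static void verifyUnchanged(String uri, long recordedMtime, long currentMtime)
            throws IOException {
        if (recordedMtime != currentMtime) {
            SimpleDateFormat fmt = new SimpleDateFormat("M/d/yy h:mm a");
            // Both timestamps can format identically even though the raw millisecond
            // values differ, e.g. two mtimes 250 ms apart within the same minute.
            throw new IOException("The distributed cache object " + uri
                    + " changed during the job from "
                    + fmt.format(new Date(recordedMtime)) + " to "
                    + fmt.format(new Date(currentMtime)));
        }
    }

    public static void main(String[] args) {
        long t = 1365430800000L; // an arbitrary instant
        try {
            verifyUnchanged("s3://bucket/part-r-00009", t, t + 250); // 250 ms apart
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is only meant to show why two "identical-looking" timestamps can still fail an exact comparison on the raw modification times.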

Here is the relevant DistributedCache code from the main driver that runs the KMeans loop:

    int iteration = 1;
    long changes = 0; 
    do {
        // First, write the previous iteration's centroids to the dist cache.
        Configuration iterConf = new Configuration();
        Path prevIter = new Path(centroidsPath.getParent(),
                String.format("centroids_%s", iteration - 1));
        FileSystem fs = prevIter.getFileSystem(iterConf);
        Path pathPattern = new Path(prevIter, "part-*");
        FileStatus [] list = fs.globStatus(pathPattern);
        for (FileStatus status : list) {
            DistributedCache.addCacheFile(status.getPath().toUri(), iterConf);
        }

        // Now, set up the job.
        Job iterJob = new Job(iterConf);
        iterJob.setJobName("KMeans " + iteration);
        iterJob.setJarByClass(KMeansDriver.class);
        Path nextIter = new Path(centroidsPath.getParent(), 
                String.format("centroids_%s", iteration));
        KMeansDriver.delete(iterConf, nextIter);

        // Set input/output formats.
        iterJob.setInputFormatClass(SequenceFileInputFormat.class);
        iterJob.setOutputFormatClass(SequenceFileOutputFormat.class);

        // Set Mapper, Reducer, Combiner
        iterJob.setMapperClass(KMeansMapper.class);
        iterJob.setCombinerClass(KMeansCombiner.class);
        iterJob.setReducerClass(KMeansReducer.class);

        // Set MR formats.
        iterJob.setMapOutputKeyClass(IntWritable.class);
        iterJob.setMapOutputValueClass(VectorWritable.class);
        iterJob.setOutputKeyClass(IntWritable.class);
        iterJob.setOutputValueClass(VectorWritable.class);

        // Set input/output paths.
        FileInputFormat.addInputPath(iterJob, data);
        FileOutputFormat.setOutputPath(iterJob, nextIter);

        iterJob.setNumReduceTasks(nReducers);

        if (!iterJob.waitForCompletion(true)) {
            System.err.println("ERROR: Iteration " + iteration + " failed!");
            System.exit(1);
        }
        iteration++;
        changes = iterJob.getCounters().findCounter(KMeansDriver.Counter.CONVERGED).getValue();
        iterJob.getCounters().findCounter(KMeansDriver.Counter.CONVERGED).setValue(0);
    } while (changes > 0);

How could the file be modified? The only possibility I can think of is that, upon completion of one iteration, the loop begins again before the previous job's centroids have finished being written. But as the comment notes, I invoke the job with waitForCompletion(true), so no residual parts of a job should still be running when the loop starts over. Any ideas?

1 Answer:

Answer 0 (score: 0)

This isn't really an answer, but I did realize that using the DistributedCache the way I was is silly, as opposed to reading the previous iteration's results directly from HDFS. I instead wrote this method in the main driver:

    public static HashMap<Integer, VectorWritable> readCentroids(Configuration conf, Path path)
            throws IOException {
        HashMap<Integer, VectorWritable> centroids = new HashMap<Integer, VectorWritable>();
        FileSystem fs = FileSystem.get(path.toUri(), conf);
        FileStatus [] list = fs.globStatus(new Path(path, "part-*"));
        for (FileStatus status : list) {
            SequenceFile.Reader reader = new SequenceFile.Reader(fs, status.getPath(), conf);
            IntWritable key = null;
            VectorWritable value = null;
            try {
                key = (IntWritable)reader.getKeyClass().newInstance();
                value = (VectorWritable)reader.getValueClass().newInstance();
            } catch (InstantiationException e) {
                e.printStackTrace();
            } catch (IllegalAccessException e) {
                e.printStackTrace();
            }
            while (reader.next(key, value)) {
                centroids.put(new Integer(key.get()),
                        new VectorWritable(value.get(), value.getClusterId(), value.getNumInstances()));
            }
            reader.close();
        }
        return centroids;
    }

This method is then called in the setup() methods of the Mapper and Reducer during each iteration, to read in the previous iteration's centroids:

    protected void setup(Context context) throws IOException {
        Configuration conf = context.getConfiguration();
        Path centroidsPath = new Path(conf.get(KMeansDriver.CENTROIDS));
        centroids = KMeansDriver.readCentroids(conf, centroidsPath);
    }
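Once setup() has populated the centroids map, the Mapper only needs a nearest-centroid lookup against it. Below is a self-contained sketch of that assignment step, with plain double[] vectors standing in for the original VectorWritable; the names `nearestCentroid` and `squaredDistance` are mine, not from the original code:

```java
import java.util.HashMap;
import java.util.Map;

/** Nearest-centroid assignment, as a KMeans mapper would perform it per input vector. */
public class NearestCentroid {

    /** Squared Euclidean distance between two vectors of equal length. */
    static double squaredDistance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return sum;
    }

    /** Returns the key of the centroid closest to the given point. */
    static int nearestCentroid(Map<Integer, double[]> centroids, double[] point) {
        int best = -1;
        double bestDist = Double.POSITIVE_INFINITY;
        for (Map.Entry<Integer, double[]> e : centroids.entrySet()) {
            double d = squaredDistance(e.getValue(), point);
            if (d < bestDist) {
                bestDist = d;
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<Integer, double[]> centroids = new HashMap<Integer, double[]>();
        centroids.put(0, new double[] {0.0, 0.0});
        centroids.put(1, new double[] {10.0, 10.0});
        System.out.println(nearestCentroid(centroids, new double[] {9.0, 8.0})); // prints 1
    }
}
```

In the real Mapper, the emitted key would be the winning centroid's IntWritable id and the value the input VectorWritable, which the Combiner and Reducer then aggregate into new centroids.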

This allowed me to remove the block of code from the loop in my original question which wrote the centroids to the DistributedCache. I have tested it, and it now works on both large and small datasets.

I still don't know why I was getting the error I posted (how could something in the supposedly read-only DistributedCache be changed? Especially when I was changing the HDFS path on every iteration?), but this seems to work, hacky as this way of reading the centroids may be.