Unable to write reducer output to a sequence file

Asked: 2013-07-29 08:32:11

Tags: hadoop hbase hdfs sequencefile

I have a Map function and a Reduce function that output key/value pairs of class Text and IntWritable. This is just the gist of the Map part in my main function:

TableMapReduceUtil.initTableMapperJob(
  tablename,           // input HBase table name
  scan,                // Scan instance to control CF and attribute selection
  AnalyzeMapper.class, // mapper
  Text.class,          // mapper output key
  IntWritable.class,   // mapper output value
  job);
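
For reference, a minimal sketch of what AnalyzeMapper could look like given those output types. The column family and qualifier names ("cf", "attr") are assumptions here, since the post does not show the mapper body:

import java.io.IOException;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class AnalyzeMapper extends TableMapper<Text, IntWritable> {

  private static final IntWritable ONE = new IntWritable(1);
  private final Text outKey = new Text();

  @Override
  protected void map(ImmutableBytesWritable row, Result value, Context context)
      throws IOException, InterruptedException {
    // Emit one (cellValue, 1) pair per row; "cf"/"attr" are placeholder names.
    byte[] cell = value.getValue(Bytes.toBytes("cf"), Bytes.toBytes("attr"));
    if (cell != null) {
      outKey.set(Bytes.toString(cell));
      context.write(outKey, ONE);
    }
  }
}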

And this is the Reducer part in my main function, which writes the output to HDFS:

job.setReducerClass(AnalyzeReducerFile.class);
job.setNumReduceTasks(1);
FileOutputFormat.setOutputPath(job, new Path("hdfs://localhost:54310/output_file"));
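
For completeness, a plausible AnalyzeReducerFile, assuming it sums the counts per key (the post does not show its body either):

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class AnalyzeReducerFile extends Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    // Sum the 1s emitted by the mapper for each key.
    int sum = 0;
    for (IntWritable v : values) {
      sum += v.get();
    }
    context.write(key, new IntWritable(sum));
  }
}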

How do I get the reducer to write to a SequenceFile instead?

I tried the following code, but it doesn't work:

job.setReducerClass(AnalyzeReducerFile.class);
job.setNumReduceTasks(1);
job.setOutputFormatClass(SequenceFileOutputFormat.class);
SequenceFileOutputFormat.setOutputPath(job, new Path("hdfs://localhost:54310/sequenceOutput"));
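
For comparison, here is how a SequenceFile-writing job is typically wired up end to end. The explicit setOutputKeyClass/setOutputValueClass calls matter because SequenceFileOutputFormat records the key and value classes in the file header. This is a sketch under the same class names as above, not a confirmed fix; the driver class name Analyze is an assumption:

Configuration conf = HBaseConfiguration.create();
Job job = new Job(conf, "analyze");   // Job.getInstance(conf, ...) on Hadoop 2.x
job.setJarByClass(Analyze.class);     // hypothetical driver class

TableMapReduceUtil.initTableMapperJob(
    tablename, scan, AnalyzeMapper.class,
    Text.class, IntWritable.class, job);

job.setReducerClass(AnalyzeReducerFile.class);
job.setNumReduceTasks(1);

// SequenceFileOutputFormat needs the final output key/value classes
// to write the file header.
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
job.setOutputFormatClass(SequenceFileOutputFormat.class);

// The output directory must not already exist, or the job fails at submission.
SequenceFileOutputFormat.setOutputPath(job, new Path("hdfs://localhost:54310/sequenceOutput"));

System.exit(job.waitForCompletion(true) ? 0 : 1);

Once the job succeeds, hadoop fs -text /sequenceOutput/part-r-00000 will decode and print the SequenceFile contents, which is a quick way to verify the output.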

EDIT: This is the output I get when I run it:

WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /sequenceOutput/_temporary/_attempt_local_0001_r_000000_0/part-r-00000 File does not exist. Holder DFSClient_NONMAPREDUCE_-79044441_1 does not have any open files.
13/07/29 17:04:20 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
13/07/29 17:04:20 WARN hdfs.DFSClient: Could not get block locations. Source file "/sequenceOutput/_temporary/_attempt_local_0001_r_000000_0/part-r-00000" - Aborting...
13/07/29 17:04:20 ERROR hdfs.DFSClient: Failed to close file /sequenceOutput/_temporary/_attempt_local_0001_r_000000_0/part-r-00000

0 Answers