Multiple input formats in Hadoop as a single format

Date: 2013-11-26 10:09:26

Tags: java xml hadoop

I am facing the following situation; please help me. I am using Hadoop MapReduce to process XML files.

By referring to this site https://gist.github.com/sritchie/808035 I was able to read my records, but when the XML file is larger than the block size I do not get the correct values. So I need to read the whole file, and for that I found this link:

https://github.com/pyongjoo/MapReduce-Example/blob/master/mysrc/XmlInputFormat.java

But now the question is: how do I combine the two input formats into a single input format?

Please help as soon as possible. Thanks.

UPDATE

public class XmlParser11 {

    public static class XmlInputFormat1 extends TextInputFormat {

        public static final String START_TAG_KEY = "xmlinput.start";
        public static final String END_TAG_KEY = "xmlinput.end";

        @Override
        protected boolean isSplitable(JobContext context, Path file) {
            return false;
        }

        @Override
        public RecordReader<LongWritable, Text> createRecordReader(InputSplit split, TaskAttemptContext context) {
            return new XmlRecordReader();
        }

        /**
         * XmlRecordReader class to read through a given XML document and output
         * XML blocks as records, as specified by the start tag and end tag.
         */
        public static class XmlRecordReader extends RecordReader<LongWritable, Text> {
            private byte[] startTag;
            private byte[] endTag;
            private long start;
            private long end;
            private FSDataInputStream fsin;
            private DataOutputBuffer buffer = new DataOutputBuffer();

            private LongWritable key = new LongWritable();
            private Text value = new Text();

            @Override
            public void initialize(InputSplit split, TaskAttemptContext context)
                    throws IOException, InterruptedException {
                Configuration conf = context.getConfiguration();
                startTag = conf.get(START_TAG_KEY).getBytes("utf-8");
                endTag = conf.get(END_TAG_KEY).getBytes("utf-8");
                FileSplit fileSplit = (FileSplit) split;
But it did not work.

1 answer:

Answer 0 (score: 1)

Use the isSplitable property, returning false, to stop the file from being split (even when it exceeds the block size). This is typically used when you want to make sure a single mapper processes the entire large file.

public class XmlInputFormat extends FileInputFormat<LongWritable, Text> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;
    }

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(InputSplit split, TaskAttemptContext context)
            throws IOException {
        // return your version of the XML record reader
    }
}
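To use this input format, the job driver has to select it and pass the start/end tags through the configuration, since the record reader looks them up there. A minimal driver sketch, assuming the class and tag names from the code above (the `<record>` element name is purely illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class XmlJobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Tell the record reader which XML element delimits one record;
        // the keys match the START_TAG_KEY / END_TAG_KEY constants above.
        conf.set("xmlinput.start", "<record>");
        conf.set("xmlinput.end", "</record>");

        Job job = Job.getInstance(conf, "xml parse");
        job.setJarByClass(XmlJobDriver.class);
        // Use the non-splittable XML format instead of the default TextInputFormat.
        job.setInputFormatClass(XmlInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Because the format is a single class, there is nothing to "combine": one InputFormat subclass supplies both the split policy (isSplitable) and the record reader.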

Alternatively, you can increase the size of each split with:

// Set the maximum split size
setMaxSplitSize(MAX_INPUT_SPLIT_SIZE);
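Note that `setMaxSplitSize(long)` is a protected method of `CombineFileInputFormat`, so the call above belongs inside a subclass. With a plain `FileInputFormat`, split size is computed as max(minSize, min(maxSize, blockSize)), so to get splits larger than one block you raise the minimum. A sketch of the job-level equivalent (the 512 MB value is illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitSizeConfig {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "large splits");
        // Force each split to be at least 512 MB, even if the HDFS block is smaller,
        // so a file under that size is handled by a single mapper.
        FileInputFormat.setMinInputSplitSize(job, 512L * 1024 * 1024);
    }
}
```

This keeps the file splittable in general while still giving one mapper a whole file of moderate size; for truly arbitrary file sizes, overriding isSplitable as shown earlier is the safer choice.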