Hadoop: Using a custom object in a Mapper's output

Asked: 2014-03-06 06:27:33

Tags: java hadoop mapreduce

I'm new to Hadoop and I'm stuck on something:

What I'm trying to do is take a list of text entries from a file, have the initial mapper do some processing on them, and then output a custom object to be aggregated by the reducer.

I put together a skeleton using all Text values and it works fine, but when I try to switch to using our own object I get an NPE (shown below).

Here is the Driver's run():

JobConf conf = new JobConf( getConf(), VectorConPreprocessor.class );
conf.setJobName( JOB_NAME + " - " + JOB_ISODATE );           
m_log.info("JOB NAME:  " + conf.getJobName() );

// Probably need to change this to be a chain-mapper later on . . . . 

conf.setInputFormat(  TextInputFormat.class          );    // reading text from files

conf.setMapperClass(         MapMVandSamples.class  );
conf.setMapOutputValueClass( SparsenessFilter.class );

//conf.setCombinerClass( CombineSparsenessTrackers.class );  // not using combiner, because ALL nodes must be gathered before reduction     
conf.setReducerClass(  ReduceSparsenessTrackers.class  );    // not sure reducing is required here . . . . 

conf.setOutputKeyClass(   Text.class );    // output key will be the SHA2
conf.setOutputValueClass( Text.class );    // output value will be the FeatureVectorMap
conf.setOutputFormat(     SequenceFileOutputFormat.class );    // binary object writer          

Here is the Mapper:

public class MapMVandSamples extends MapReduceBase implements Mapper<LongWritable, Text, Text, SparsenessFilter> 
{

    public static final String DELIM = ":";
    protected static Logger m_log    = Logger.getLogger( MapMVandSamples.class );    

    // In this case we're reading a line of text at a time from the file
    // We don't really care about the SHA256 for now, just create a SparsenessFilter
    //   for each entry.  The reducer will aggregate them later.
    @Override
    public void map( LongWritable bytePosition, Text lineOfText, OutputCollector<Text, SparsenessFilter> outputCollector, Reporter reporter ) throws IOException
    {                
        String[] data = lineOfText.toString().split( DELIM, 2 );
        String sha256 = data[0];
        String json   = data[1];

        // create a SparsenessFilter for this record
        SparsenessFilter sf = new SparsenessFilter();
        // crunching goes here

        outputCollector.collect( new Text("AllOneForNow"), sf );    
    }

}

And finally, the error:

14/03/05 21:56:56 INFO mapreduce.Job: Task Id : attempt_1394084907462_0002_m_000000_1, Status : FAILED
Error: java.lang.NullPointerException
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:989)
at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:390)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)

Any ideas? Do I need to implement an interface on SparsenessFilter so that the Mapper's OutputCollector can handle it?

Thanks!

2 Answers:

Answer 0 (score: 2):

All custom key and value classes should implement the WritableComparable interface (strictly speaking, values only need Writable, but implementing WritableComparable covers both cases).

You need to implement readFields(DataInput in), write(DataOutput out), and compareTo() as well.

Example
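
For illustration, here is a minimal sketch of what implementing WritableComparable on SparsenessFilter could look like; the count field and its type are hypothetical placeholders for whatever state the real class actually carries:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

public class SparsenessFilter implements WritableComparable<SparsenessFilter> {

    private long count;    // hypothetical field; substitute the class's real state

    public SparsenessFilter() { }    // Hadoop needs a no-arg constructor to deserialize instances

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(count);     // serialize every field in a fixed order
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        count = in.readLong();    // deserialize in exactly the same order as write()
    }

    @Override
    public int compareTo(SparsenessFilter other) {
        return Long.compare(count, other.count);    // only exercised when the class is used as a key
    }
}

With this in place the existing conf.setMapOutputValueClass( SparsenessFilter.class ) call should work, because the framework can now find a serializer for the map output value.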

Answer 1 (score: 1):

Hadoop's Text and IntWritable both implement these interfaces:

  1. Comparable
  2. Writable
  3. WritableComparable

I haven't found any documentation on exactly what a Key or Value class needs to implement, but the Comparable interface is probably relevant for the Key class, and the Writable interface is what matters for being a Value (see the sketch below).
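
Since SparsenessFilter is only used as a map output value in the question, a sketch under that assumption only needs the Writable half; compareTo() is not required because the framework never sorts values. The count field is again a hypothetical placeholder:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

// Value-only version of the sketch above: Writable alone is enough for a value class.
public class SparsenessFilter implements Writable {

    private long count;    // hypothetical field

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(count);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        count = in.readLong();
    }
}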