Implementing a Hadoop Map function the Spark way with JavaPairRDD

Asked: 2015-06-28 17:20:44

Tags: hadoop apache-spark

I have an RDD:

JavaPairRDD<Long, ViewRecord> myRDD

which was created via the newAPIHadoopRDD method. I have an existing map function that I would like to implement the Spark way:

LongWritable one = new LongWritable(1L);

protected void map(Long key, ViewRecord viewRecord, Context context)
    throws IOException, InterruptedException {

  String url = viewRecord.getUrl();
  long day = viewRecord.getDay();

  // tuple is a reusable KeyValueWritable<Text, LongWritable> (see PS below)
  tuple.getKey().set(url);
  tuple.getValue().set(day);

  context.write(tuple, one);
}
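For context, creating such an RDD from a Hadoop InputFormat typically looks like the sketch below. ViewRecordInputFormat is a hypothetical stand-in, since the post does not show which InputFormat produces the (Long, ViewRecord) pairs:

import org.apache.hadoop.conf.Configuration;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

JavaSparkContext sc = new JavaSparkContext(
        new SparkConf().setAppName("view-records").setMaster("local[*]"));

// ViewRecordInputFormat is assumed, not from the post: a custom
// org.apache.hadoop.mapreduce.InputFormat<Long, ViewRecord>.
JavaPairRDD<Long, ViewRecord> myRDD = sc.newAPIHadoopRDD(
        new Configuration(),
        ViewRecordInputFormat.class,
        Long.class,
        ViewRecord.class);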

PS: tuple comes from:

KeyValueWritable<Text, LongWritable>

which can be found here: TextLong.java
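The linked TextLong.java is not reproduced here; purely as an assumption, a wrapper matching the getKey()/getValue() calls in the map function above might look like this minimal sketch:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Hypothetical reconstruction (the real TextLong.java is only linked):
// a Writable pair whose parts are mutated in place via getKey()/getValue().
public class KeyValueWritable<K extends Writable, V extends Writable>
        implements Writable {
    private final K key;
    private final V value;

    public KeyValueWritable(K key, V value) {
        this.key = key;
        this.value = value;
    }

    public K getKey()   { return key; }
    public V getValue() { return value; }

    @Override
    public void write(DataOutput out) throws IOException {
        key.write(out);
        value.write(out);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        key.readFields(in);
        value.readFields(in);
    }
}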

1 Answer:

Answer 0 (score: 2):

I don't know what tuple is, but if you just want to map the records to tuples with the key (url, day) and the value 1L, you can do it like this:

JavaPairRDD<Tuple2<String, Long>, Long> result = myRDD
    .values()                      // drop the Long keys, keep the ViewRecords
    .mapToPair(viewRecord -> {
        String url = viewRecord.getUrl();
        long day = viewRecord.getDay();
        return new Tuple2<>(new Tuple2<>(url, day), 1L);
    });


// Java 7 style: the same transformation with an anonymous PairFunction.
// Pair is not defined in the answer; any pair class with proper
// equals()/hashCode() (e.g. javafx.util.Pair) works as a key, though
// scala.Tuple2 as above is the more idiomatic choice in Spark.
JavaPairRDD<Pair<String, Long>, Long> result = myRDD
        .values()
        .mapToPair(new PairFunction<ViewRecord, Pair<String, Long>, Long>() {
            @Override
            public Tuple2<Pair<String, Long>, Long> call(ViewRecord record) throws Exception {
                String url = record.getUrl();
                Long day = record.getDay();

                return new Tuple2<>(new Pair<String, Long>(url, day), 1L);
            }
        });
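Since the mapper emits the constant 1L per record, it presumably feeds a counting reducer. Assuming that, the Spark equivalent of the reduce phase is reduceByKey, applied here to the Tuple2-keyed result from the Java 8 version:

import scala.Tuple2;

// Sum the 1L values per (url, day) key, mirroring a count-style reducer.
JavaPairRDD<Tuple2<String, Long>, Long> counts =
        result.reduceByKey((a, b) -> a + b);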