Cannot intercept with aspects

Time: 2015-09-09 10:10:03

Tags: java mapreduce aspectj

I am trying to use AspectJ with the MapReduce example, but there is one thing I don't understand. First, let me show you my code.

[1] The WordCount example

package org.apache.hadoop.mapred.examples;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;


/**
 * Common Wordcount example
 */
public class WordCount {


    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);
        conf.setNumReduceTasks(2);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}

[2] My MapReduce aspect

package org.apache.hadoop.mapred.aspects;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

import java.util.Arrays;

@Aspect
public class MapReduceAspects {

    @Before("execution(* map(..))")
    public void mymap(JoinPoint joinPoint) {
        // Arrays.toString prints the actual argument values; printing the
        // Object[] directly would only show the array's identity hash.
        System.out.println("My Map Execution: " + Arrays.toString(joinPoint.getArgs()) + ":" + joinPoint.getTarget());
        for (Object o : joinPoint.getArgs()) {
            System.out.println(o.toString());
        }
    }

    @Before("execution(* reduce(..))")
    public void myreduce() {
        System.out.println("My Reduce Execution");
    }

    @Before("execution(* collect(..))")
    public void updatehash(JoinPoint joinPoint) {
        System.out.println("Output collect: Args: " + Arrays.toString(joinPoint.getArgs()));
    }
}


[3] bean-aspects.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p"
   xmlns:aop="http://www.springframework.org/schema/aop" xmlns:context="http://www.springframework.org/schema/context"
   xmlns:jee="http://www.springframework.org/schema/jee" xmlns:tx="http://www.springframework.org/schema/tx"
   xmlns:task="http://www.springframework.org/schema/task"
   xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.2.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-3.2.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.2.xsd http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task-3.2.xsd">

<aop:aspectj-autoproxy proxy-target-class="true">
    <aop:include name="mapreduceAspect"/>
</aop:aspectj-autoproxy>

<bean id="mapreduceAspect" class="org.apache.hadoop.mapred.aspects.MapReduceAspects"/></beans>

[4] The OutputCollector interface

package org.apache.hadoop.mapred;

import java.io.IOException;
import org.apache.hadoop.classification.InterfaceAudience.Public;
import org.apache.hadoop.classification.InterfaceStability.Stable;

@Public
@Stable
public interface OutputCollector<K, V> {
   void collect(K var1, V var2) throws IOException;
}

In [1] I have a WordCount example with map and reduce functions. When I launch my application on the MapReduce framework, the framework creates a job that executes the map and reduce functions. The map function reads the input directory and the reduce function writes out the result.

I can intercept the map and reduce function calls with AspectJ, but I cannot intercept the output.collect(word, one) call inside the map function. Why is that? Is it because the collect method in the interface [4] has no annotations? Or have I not configured the aspects correctly?

I would be glad if someone could explain to me why AspectJ behaves this way.

Thanks,

1 answer:

Answer 0: (score: 1)

The answer is simple:

  • The map and reduce methods are part of your own code, i.e. they are subject to aspect weaving via execution() pointcuts.
  • The collect method lives in a third-party library and, under normal circumstances, is not woven by the aspect. Therefore you cannot intercept it with an execution() pointcut, only with a call() pointcut.
  • Try something like call(* collect(..)) and it will work.
  • Caveat: Spring AOP does not support the call() pointcut; you need full AspectJ in order to use it. See chapter 10.8, Using AspectJ with Spring applications, in the Spring manual for how to activate AspectJ LTW (load-time weaving).
  • If you use full AspectJ via LTW or binary CTW (compile-time weaving), you can also weave the third-party code itself and then use execution() pointcuts. You just have to make sure the weaving agent is loaded before any of the target third-party code, which is usually guaranteed, because that is exactly what Java agents were invented for.
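
The call() suggestion above can be sketched as an annotation-style aspect in the same style as [2]. This is a minimal sketch, assuming full AspectJ weaving (ajc compile-time weaving or a load-time weaving agent), not Spring AOP proxies; the class name CollectCallAspect is my own, and the pointcut is narrowed to the OutputCollector type so it does not match unrelated collect methods:

```java
package org.apache.hadoop.mapred.aspects;

import java.util.Arrays;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class CollectCallAspect {

    // call() matches the caller side, i.e. the woven bytecode of your own
    // map() method, so the advice fires even though the implementation of
    // OutputCollector.collect() lives in an unwoven third-party jar.
    @Before("call(* org.apache.hadoop.mapred.OutputCollector.collect(..))")
    public void beforeCollect(JoinPoint joinPoint) {
        System.out.println("collect() called from " + joinPoint.getSourceLocation()
                + " with args " + Arrays.toString(joinPoint.getArgs()));
    }
}
```

For load-time weaving, the aspect would additionally have to be declared in a META-INF/aop.xml on the classpath, and the JVM started with the -javaagent:aspectjweaver.jar agent before the Hadoop classes are loaded.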