Can't find MySQL driver on an Amazon EMR cluster with Pig 0.12.0 and Hadoop 2.4.0

Asked: 2015-05-26 15:27:33

Tags: java mysql hadoop amazon-web-services apache-pig

I am using SWF to run a workflow that creates an EMR cluster and runs a Pig script on it. I am trying to run it with Pig 0.12.0 and Hadoop 2.4.0, and the following exception is thrown when the script attempts to store into a MySQL database in RDS using org.apache.pig.piggybank.storage.DBStorage:

2015-05-26 14:36:47,057 [main] ERROR org.apache.pig.piggybank.storage.DBStorage - 
    can't load DB driver:com.mysql.jdbc.Driver
java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
  at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
  at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
  at java.security.AccessController.doPrivileged(Native Method)
  at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
  at java.lang.Class.forName0(Native Method)
  at java.lang.Class.forName(Class.java:191)
  at org.apache.pig.piggybank.storage.DBStorage.<init>(DBStorage.java:66)
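The stack trace shows the failure originating in DBStorage's constructor, which looks the driver class up by name. A minimal standalone reproduction of that failure mode (this is an illustration, not code from the question) is:

```java
public class DriverCheck {
    public static void main(String[] args) {
        try {
            // The same lookup DBStorage performs in its constructor:
            // it succeeds only if the connector JAR is on the JVM classpath
            // at the moment the class is loaded.
            Class.forName("com.mysql.jdbc.Driver");
            System.out.println("driver found");
        } catch (ClassNotFoundException e) {
            System.out.println("can't load DB driver: com.mysql.jdbc.Driver");
        }
    }
}
```

Because the lookup happens when the constructor runs, the JAR has to be on the classpath before Pig starts executing the script.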

This worked previously with Pig 0.11.1 and Hadoop 1.0.3. The SWF workflow and activities are written in Java, using version 1.9.19 of the AWS SDK for Java. Searching the wider internet suggests that PIG_CLASSPATH needs to be modified to include the MySQL connector JAR; currently the script contains

REGISTER $LIB_PATH/mysql-connector-java-5.1.26.jar;

where $LIB_PATH is an S3 location, but there are suggestions that this is no longer sufficient for Pig 0.12.0 + Hadoop 2.4.0.
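For reference, the failing store follows the usual piggybank DBStorage pattern; a minimal sketch (the table name, column list, and connection details here are placeholders, not the original script's values) looks like:

```pig
-- Register the connector from S3, as the original script does
REGISTER $LIB_PATH/mysql-connector-java-5.1.26.jar;

-- Store a relation into RDS via piggybank's DBStorage; the driver
-- class must be loadable when the DBStorage constructor runs
STORE results INTO 'ignored' USING org.apache.pig.piggybank.storage.DBStorage(
    'com.mysql.jdbc.Driver',
    'jdbc:mysql://my-rds-endpoint:3306/mydb',
    'user', 'password',
    'INSERT INTO results (id, value) VALUES (?, ?)');
```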

The code that constructs the request used to launch the cluster looks like this:

public final RunJobFlowRequest constructRequest(final List<String> params) {
    ConductorContext config = ContextHolder.get();

    final JobFlowInstancesConfig instances = new JobFlowInstancesConfig().withInstanceCount(config.getEmrInstanceCount())
            .withMasterInstanceType(config.getEmrMasterType()).withSlaveInstanceType(config.getEmrSlaveType())
            .withKeepJobFlowAliveWhenNoSteps(false).withHadoopVersion(config.getHadoopVersion());

    if (!StringUtils.isBlank(config.getEmrEc2SubnetId())) {
        instances.setEc2SubnetId(config.getEmrEc2SubnetId());
    }

    final BootstrapActionConfig bootStrap = new BootstrapActionConfig().withName("Bootstrap Pig").withScriptBootstrapAction(
            new ScriptBootstrapActionConfig().withPath(config.getEmrBootstrapPath()).withArgs(config.getEmrBootstrapArgs()));

    final StepFactory stepFactory = new StepFactory();
    final List<StepConfig> steps = new LinkedList<>();

    steps.add(new StepConfig().withName("Enable Debugging").withActionOnFailure(ActionOnFailure.TERMINATE_JOB_FLOW)
            .withHadoopJarStep(stepFactory.newEnableDebuggingStep()));

    steps.add(new StepConfig().withName("Install Pig").withActionOnFailure(ActionOnFailure.TERMINATE_JOB_FLOW)
            .withHadoopJarStep(stepFactory.newInstallPigStep(config.getPigVersion())));

    for (final PigScript originalScript : config.getScripts()) {
        ArrayList<String> newParams = new ArrayList<>();

        newParams.addAll(Arrays.asList(originalScript.getScriptParams()));
        newParams.addAll(params);

        final PigScript script = new PigScript(originalScript.getName(), originalScript.getScriptUrl(),
                AWSHelper.burstParameters(newParams.toArray(new String[newParams.size()])));

        steps.add(new StepConfig()
                .withName(script.getName())
                .withActionOnFailure(ActionOnFailure.CONTINUE)
                .withHadoopJarStep(
                        stepFactory.newRunPigScriptStep(script.getScriptUrl(), config.getPigVersion(), script.getScriptParams())));
    }

    final RunJobFlowRequest request = new RunJobFlowRequest().withName(makeRunJobName()).withSteps(steps).withVisibleToAllUsers(true)
            .withBootstrapActions(bootStrap).withLogUri(config.getEmrLogUrl()).withInstances(instances);

    return request;
}

1 Answer:

Answer 0 (score: 1)

In my case, the solution was to modify the shell script used to bootstrap the cluster so that it copies a suitable JAR into place:

wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.34/mysql-connector-java-5.1.34.jar -O $PIG_CLASSPATH/mysql-connector-java-5.1.34.jar

To summarize: with Hadoop 2.4.0 and Pig 0.12.0, registering the JAR in the script is no longer enough. The JAR must be available at the time Pig is invoked, so make sure it is on $PIG_CLASSPATH.
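As a sketch, the whole bootstrap action reduces to fetching the connector and dropping it into Pig's lib directory before any step runs. The URL-building helper and the default lib path below are assumptions for illustration, not taken from the original bootstrap script:

```shell
#!/bin/bash
# Build the Maven Central URL for a given connector version
# (hypothetical helper; the answer's command hard-codes the URL).
maven_url() {
    local version="$1"
    echo "http://central.maven.org/maven2/mysql/mysql-connector-java/${version}/mysql-connector-java-${version}.jar"
}

# Assumed default location of Pig's lib directory on the EMR node;
# adjust to wherever $PIG_CLASSPATH points on your cluster.
PIG_LIB_DIR="${PIG_CLASSPATH:-/home/hadoop/pig/lib}"

# Only download when the target directory actually exists,
# i.e. when running on the cluster rather than locally.
if [ -d "$PIG_LIB_DIR" ]; then
    wget -q "$(maven_url 5.1.34)" -O "${PIG_LIB_DIR}/mysql-connector-java-5.1.34.jar"
fi
```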