Trying to run Spark on EMR with the AWS SDK for Java, but the remote JAR stored on S3 is skipped

Asked: 2018-07-18 00:32:41

Tags: apache-spark amazon-s3 amazon-ec2 jar amazon-emr

I am trying to use the AWS SDK for Java to run Spark on EMR, but I am having trouble getting spark-submit to pick up the JAR I have stored on S3. Here is the relevant code:

public String launchCluster() throws Exception {
    StepFactory stepFactory = new StepFactory();

    // Creates a cluster flow step for debugging
    StepConfig enableDebugging = new StepConfig().withName("Enable debugging")
            .withActionOnFailure("TERMINATE_JOB_FLOW")
            .withHadoopJarStep(stepFactory.newEnableDebuggingStep());

    // Here is the original code before I tried command-runner.jar. 
    // When using this, I get a ClassNotFoundException for 
    // org.apache.spark.SparkConf. This is because for some reason, 
    // the super-jar that I'm generating doesn't include apache spark. 
    // Even so, I believe EMR should already have Spark installed if
    // I configure this correctly...

    //        HadoopJarStepConfig runExampleConfig = new HadoopJarStepConfig()
    //                .withJar(JAR_LOCATION)
    //                .withMainClass(MAIN_CLASS);

    HadoopJarStepConfig runExampleConfig = new HadoopJarStepConfig()
            .withJar("command-runner.jar")
            .withArgs(
                    "spark-submit",
                    "--master", "yarn",
                    "--deploy-mode", "cluster",
                    "--class", SOME_MAIN_CLASS,
                    SOME_S3_PATH_TO_SUPERJAR,
                    "-useSparkLocal", "false"
            );

    StepConfig customExampleStep = new StepConfig().withName("Example Step")
            .withActionOnFailure("TERMINATE_JOB_FLOW")
            .withHadoopJarStep(runExampleConfig);

    // Create Applications so that the request knows to launch
    // the cluster with support for Hadoop and Spark.

    // Unsure if Hadoop is necessary...
    Application hadoopApp = new Application().withName("Hadoop");
    Application sparkApp = new Application().withName("Spark");

    RunJobFlowRequest request = new RunJobFlowRequest().withName("spark-cluster")
            .withReleaseLabel("emr-5.15.0")
            .withSteps(enableDebugging, customExampleStep)
            .withApplications(hadoopApp, sparkApp)
            .withLogUri(LOG_URI)
            .withServiceRole("EMR_DefaultRole")
            .withJobFlowRole("EMR_EC2_DefaultRole")
            .withVisibleToAllUsers(true)
            .withInstances(new JobFlowInstancesConfig()
                    .withInstanceCount(3)
                    .withKeepJobFlowAliveWhenNoSteps(true)
                    .withMasterInstanceType("m3.xlarge")
                    .withSlaveInstanceType("m3.xlarge")
            );
    // Submit the request; `emrClient` is assumed to be an already-configured
    // AmazonElasticMapReduce client (see the sketch below).
    RunJobFlowResult result = emrClient.runJobFlow(request);
    return result.getJobFlowId();
}
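For completeness, a minimal sketch of how that `emrClient` could be built and the request submitted with the AWS SDK for Java v1 (the variable name and region below are illustrative placeholders, not something from the original post):

// Hypothetical client setup; uses the default credentials chain.
AmazonElasticMapReduce emrClient = AmazonElasticMapReduceClientBuilder.standard()
        .withRegion(Regions.US_EAST_1) // pick the region where the cluster should run
        .build();

// Launch the cluster and print the resulting job flow id.
RunJobFlowResult result = emrClient.runJobFlow(request);
System.out.println("Started job flow: " + result.getJobFlowId());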

The steps complete without errors, but nothing is actually produced... When I check the logs, stderr includes the following:
Warning: Skip remote jar s3://somebucket/myservice-1.0-super.jar.

18/07/17 22:08:31 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.

I'm not sure what the problem is based on these logs. I believe Spark is installed correctly on the cluster. For some additional context: when I use withJar directly with the super JAR stored on S3 (instead of command-runner.jar), and without withArgs, it picks up the JAR correctly, but Spark isn't available — I get a ClassNotFoundException for SparkConf (or for JavaSparkContext, depending on which one my Spark job code tries to create first).
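For reference, the direct-JAR variant described here (the commented-out lines in the snippet above) would look roughly like this; JAR_LOCATION and MAIN_CLASS are placeholder constants from the question. Because EMR runs such a step as a plain Hadoop JAR step rather than through spark-submit, the Spark classes would have to be bundled into the super JAR, which is consistent with the ClassNotFoundException described above.

// Direct-JAR step: EMR invokes MAIN_CLASS straight from the JAR on S3,
// without going through spark-submit.
HadoopJarStepConfig directJarConfig = new HadoopJarStepConfig()
        .withJar(JAR_LOCATION)       // s3:// path to the super JAR (placeholder)
        .withMainClass(MAIN_CLASS);  // placeholder from the question

StepConfig directJarStep = new StepConfig().withName("Example Step (direct JAR)")
        .withActionOnFailure("TERMINATE_JOB_FLOW")
        .withHadoopJarStep(directJarConfig);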

Any pointers would be greatly appreciated!

1 Answer:

Answer (score: 1)

I think that if you are using a recent EMR release (e.g. emr-5.17.0), the argument to --master should be yarn-cluster rather than yarn in your spark-submit arguments. I ran into the same problem, and after making that change it worked fine for me.
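Applied to the question's step configuration, that change would look something like the sketch below (keeping the placeholder constants from the question). Since --master yarn-cluster already implies cluster deploy mode, the separate --deploy-mode flag is typically dropped:

// Same step as in the question, with --master changed per this answer.
HadoopJarStepConfig runExampleConfig = new HadoopJarStepConfig()
        .withJar("command-runner.jar")
        .withArgs(
                "spark-submit",
                "--master", "yarn-cluster",  // was "yarn" with "--deploy-mode", "cluster"
                "--class", SOME_MAIN_CLASS,
                SOME_S3_PATH_TO_SUPERJAR,
                "-useSparkLocal", "false"
        );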