Creating an AWS EMR cluster with a Spark step via a Lambda function fails with "Local file does not exist"

Asked: 2018-07-27 00:38:14

Tags: amazon-web-services apache-spark aws-lambda amazon-emr

I am trying to launch an EMR cluster with a Spark step using a Lambda function.

This is my Lambda function (Python 2.7):

import boto3

def lambda_handler(event, context):
    conn = boto3.client("emr")        
    cluster_id = conn.run_job_flow(
        Name='LSR Batch Testrun',
        ServiceRole='EMR_DefaultRole',
        JobFlowRole='EMR_EC2_DefaultRole',
        VisibleToAllUsers=True,
        LogUri='s3n://aws-logs-171256445476-ap-southeast-2/elasticmapreduce/',
        ReleaseLabel='emr-5.16.0',
        Instances={
            "Ec2SubnetId": "<my-subnet>",
            'InstanceGroups': [
                {
                    'Name': 'Master nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'MASTER',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 1,
                },
                {
                    'Name': 'Slave nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'CORE',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 2,
                }
            ],
            'KeepJobFlowAliveWhenNoSteps': False,
            'TerminationProtected': False
        },
        Applications=[{
            'Name': 'Spark',
            'Name': 'Hive'
        }],
        Configurations=[
          {
            "Classification": "hive-site",
            "Properties": {
              "hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
            }
          },
          {
            "Classification": "spark-hive-site",
            "Properties": {
              "hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
            }
          }
        ],
        Steps=[{
            'Name': 'mystep',
            'ActionOnFailure': 'TERMINATE_CLUSTER',
            'HadoopJarStep': {
                'Jar': 's3://elasticmapreduce/libs/script-runner/script-runner.jar',
                'Args': [
                    "/home/hadoop/spark/bin/spark-submit", "--deploy-mode", "cluster",
                    "--master", "yarn-cluster", "--class", "org.apache.spark.examples.SparkPi", 
                    "s3://support.elasticmapreduce/spark/1.2.0/spark-examples-1.2.0-hadoop2.4.0.jar", "10"
                ]
            }
        }],
    )
    return "Started cluster {}".format(cluster_id)

The cluster starts up, but then fails when it tries to execute the step. The error log contains the following exception:

Exception in thread "main" java.lang.RuntimeException: Local file does not exist.
    at com.amazon.elasticmapreduce.scriptrunner.ScriptRunner.fetchFile(ScriptRunner.java:30)
    at com.amazon.elasticmapreduce.scriptrunner.ScriptRunner.main(ScriptRunner.java:56)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:148)

So it seems the script-runner does not know how to fetch the .jar file from S3?

Any help is appreciated...

2 Answers:

Answer 0 (score: 0)

Not every prebuilt EMR image can copy jars and scripts from S3 by itself, so you have to do it in a bootstrap action:

BootstrapActions=[
    {
        'Name': 'Install additional components',
        'ScriptBootstrapAction': {
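            # code_dir is assumed to be an S3 prefix such as 's3://<yourbucket>/<path>';
            # a bootstrap action 'Path' must point to a location in S3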
            'Path': code_dir + '/scripts' + '/emr_bootstrap.sh'
        }
    }
],

Here is what my bootstrap script does:

#!/bin/bash
HADOOP="/home/hadoop"
BUCKET="s3://<yourbucket>/<path>"

# Sync jars libraries
aws s3 sync ${BUCKET}/jars/ ${HADOOP}/
aws s3 sync ${BUCKET}/scripts/ ${HADOOP}/

# Install python packages
sudo pip install --upgrade pip
sudo ln -s /usr/local/bin/pip /usr/bin/pip
sudo pip install psycopg2 numpy boto3 pythonds
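For the bootstrap Path above to resolve, the script (and the jars and scripts it syncs) must already be in S3 when the cluster launches. A minimal sketch, assuming the same placeholder bucket and prefix as in the answer:

#!/bin/bash
# Upload the bootstrap script and job artifacts to the S3 prefix
# that code_dir / BUCKET point at (placeholders, not real names)
aws s3 cp emr_bootstrap.sh s3://<yourbucket>/<path>/scripts/emr_bootstrap.sh
aws s3 sync ./jars/ s3://<yourbucket>/<path>/jars/
aws s3 sync ./scripts/ s3://<yourbucket>/<path>/scripts/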

Then you can invoke your scripts and jars like this:

{
    'Name': 'START YOUR STEP',
    'ActionOnFailure': 'TERMINATE_CLUSTER',
    'HadoopJarStep': {
        'Jar': 'command-runner.jar',
        'Args': [
            "spark-submit", "--jars", ADDITIONAL_JARS,
            "--py-files", "/home/hadoop/modules.zip",
            "/home/hadoop/<your code>.py"
        ]
    }
},

Answer 1 (score: 0)

I was eventually able to solve the problem. The main issue was the broken "Applications" configuration; it has to look like this instead:

Applications=[{
       'Name': 'Spark'
    },
    {
       'Name': 'Hive'
    }],
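The original version put both names into a single dict. Python silently keeps only the last value for a duplicated key, so only Hive was actually requested and Spark was never installed on the cluster. A quick check in a Python shell shows the collapse:

# Duplicate keys in a dict literal: the last assignment silently wins
>>> [{'Name': 'Spark', 'Name': 'Hive'}]
[{'Name': 'Hive'}]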

And the final Steps element:

Steps=[{
    'Name': 'lsr-step1',
    'ActionOnFailure': 'TERMINATE_CLUSTER',
    'HadoopJarStep': {
        'Jar': 'command-runner.jar',
        'Args': [
            "spark-submit", "--class", "org.apache.spark.examples.SparkPi",
            "s3://support.elasticmapreduce/spark/1.2.0/spark-examples-1.2.0-hadoop2.4.0.jar", "10"
        ]
    }
}]
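One more note, not from the original answer: run_job_flow returns a response dict, so the question's return statement formats the whole dict; the cluster id itself is under 'JobFlowId'. With that id you can poll the step state via boto3, for example:

import boto3

emr = boto3.client("emr")

# 'j-XXXXXXXXXXXX' stands in for the 'JobFlowId' value returned by run_job_flow
cluster_id = 'j-XXXXXXXXXXXX'

# Print each step's name and current state (PENDING, RUNNING, COMPLETED, FAILED, ...)
for step in emr.list_steps(ClusterId=cluster_id)['Steps']:
    print("{}: {}".format(step['Name'], step['Status']['State']))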