Python 2.7.12
boto3 == 1.3.1
How can I add steps to a running EMR cluster and have the cluster terminate after the steps complete, whether they fail or succeed?

Creating the cluster:
import boto3

# name, instance_type and instance_count are defined earlier in the script.
client = boto3.client('emr')

response = client.run_job_flow(
    Name=name,
    LogUri='s3://mybucket/emr/',
    ReleaseLabel='emr-5.9.0',
    Instances={
        'MasterInstanceType': instance_type,
        'SlaveInstanceType': instance_type,
        'InstanceCount': instance_count,
        'KeepJobFlowAliveWhenNoSteps': True,
        'Ec2KeyName': 'KeyPair',
        'EmrManagedSlaveSecurityGroup': 'sg-1234',
        'EmrManagedMasterSecurityGroup': 'sg-1234',
        'Ec2SubnetId': 'subnet-1q234',
    },
    Applications=[
        {'Name': 'Spark'},
        {'Name': 'Hadoop'}
    ],
    BootstrapActions=[
        {
            'Name': 'Install Python packages',
            'ScriptBootstrapAction': {
                'Path': 's3://mybucket/code/spark/bootstrap_spark_cluster.sh'
            }
        }
    ],
    VisibleToAllUsers=True,
    JobFlowRole='EMR_EC2_DefaultRole',
    ServiceRole='EMR_DefaultRole',
    Configurations=[
        {
            'Classification': 'spark',
            'Properties': {
                'maximizeResourceAllocation': 'true'
            }
        },
    ],
)
Adding a step:
# cluster_id is the JobFlowId returned by the run_job_flow call above.
response = client.add_job_flow_steps(
    JobFlowId=cluster_id,
    Steps=[
        {
            'Name': 'Run Step',
            'ActionOnFailure': 'TERMINATE_CLUSTER',
            'HadoopJarStep': {
                'Args': [
                    'spark-submit',
                    '--deploy-mode', 'cluster',
                    '--py-files',
                    's3://mybucket/code/spark/spark_udfs.py',
                    's3://mybucket/code/spark/{}'.format(spark_script),
                    '--some-arg'
                ],
                'Jar': 'command-runner.jar'
            }
        }
    ]
)
This successfully adds a step and runs it; however, I want the cluster to terminate automatically once the step completes, as described for the AWS CLI: http://docs.aws.amazon.com/cli/latest/reference/emr/create-cluster.html
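(For reference, the CLI behavior the linked page describes is the --auto-terminate flag on create-cluster. A rough CLI sketch of the cluster above; the instance type, bucket, and script path are placeholders, not values from the question:)

aws emr create-cluster \
    --release-label emr-5.9.0 \
    --applications Name=Spark Name=Hadoop \
    --use-default-roles \
    --instance-type m4.large --instance-count 3 \
    --steps Type=Spark,Name=RunStep,ActionOnFailure=TERMINATE_CLUSTER,Args=[--deploy-mode,cluster,s3://mybucket/code/spark/my_script.py] \
    --auto-terminate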
Answer 0 (score: 4)
In your case (creating the cluster with boto3), you can add the flags 'TerminationProtected': False, 'AutoTerminate': True to your cluster-creation call. That way, the cluster shuts down after your steps have finished running.
Another solution is to add a second step, right after the one you want to run, that terminates the cluster. So basically you need to run this command as a step:
aws emr terminate-clusters --cluster-ids your_cluster_id
The tricky part is retrieving the cluster_id. Here you can find some solutions: Does an EMR master node know it's cluster id?
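When the submitting script already knows the cluster ID (as here, from the run_job_flow response), a minimal sketch of that extra step could look like this; the step name and ActionOnFailure value are arbitrary choices, not anything prescribed by the API:

# Schedule termination as a final step, assuming `client` and `cluster_id`
# from the snippets above. command-runner.jar executes the AWS CLI that
# ships on EMR nodes; the cluster's EC2 instance profile must be allowed
# to call elasticmapreduce:TerminateJobFlows for this to succeed.
client.add_job_flow_steps(
    JobFlowId=cluster_id,
    Steps=[{
        'Name': 'Terminate cluster',
        'ActionOnFailure': 'CONTINUE',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': ['aws', 'emr', 'terminate-clusters',
                     '--cluster-ids', cluster_id],
        },
    }],
)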
Answer 1 (score: 0)
The suggested 'AutoTerminate': True parameter did not work for me. However, it worked once I changed the 'KeepJobFlowAliveWhenNoSteps' parameter from True to False. Your code should then look like this:
response = client.run_job_flow(
    Name=name,
    LogUri='s3://mybucket/emr/',
    ReleaseLabel='emr-5.9.0',
    Instances={
        'MasterInstanceType': instance_type,
        'SlaveInstanceType': instance_type,
        'InstanceCount': instance_count,
        # The cluster terminates as soon as it has no pending steps.
        'KeepJobFlowAliveWhenNoSteps': False,
        'Ec2KeyName': 'KeyPair',
        'EmrManagedSlaveSecurityGroup': 'sg-1234',
        'EmrManagedMasterSecurityGroup': 'sg-1234',
        'Ec2SubnetId': 'subnet-1q234',
    },
    Applications=[
        {'Name': 'Spark'},
        {'Name': 'Hadoop'}
    ],
    BootstrapActions=[
        {
            'Name': 'Install Python packages',
            'ScriptBootstrapAction': {
                'Path': 's3://mybucket/code/spark/bootstrap_spark_cluster.sh'
            }
        }
    ],
    VisibleToAllUsers=True,
    JobFlowRole='EMR_EC2_DefaultRole',
    ServiceRole='EMR_DefaultRole',
    Configurations=[
        {
            'Classification': 'spark',
            'Properties': {
                'maximizeResourceAllocation': 'true'
            }
        },
    ],
)
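One caveat worth noting: with 'KeepJobFlowAliveWhenNoSteps': False, a cluster created with no queued steps can shut down as soon as it comes up, so it is safest to submit the step in the same run_job_flow call via the Steps parameter rather than calling add_job_flow_steps afterwards. A minimal sketch, reusing the variables and script path from the question:

response = client.run_job_flow(
    Name=name,
    ReleaseLabel='emr-5.9.0',
    Instances={
        'MasterInstanceType': instance_type,
        'SlaveInstanceType': instance_type,
        'InstanceCount': instance_count,
        'KeepJobFlowAliveWhenNoSteps': False,  # terminate once all steps finish
    },
    Steps=[{
        'Name': 'Run Step',
        'ActionOnFailure': 'TERMINATE_CLUSTER',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': ['spark-submit', '--deploy-mode', 'cluster',
                     's3://mybucket/code/spark/{}'.format(spark_script)],
        },
    }],
    JobFlowRole='EMR_EC2_DefaultRole',
    ServiceRole='EMR_DefaultRole',
)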
Answer 2 (score: 0)
You can create a short-lived cluster that automatically terminates after all of its steps have run by specifying 'KeepJobFlowAliveWhenNoSteps': False in the Instances parameter. I've added a complete example of how to do this to GitHub.
Here's some code from the demo:
import logging

from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)


def run_job_flow(
        name, log_uri, keep_alive, applications, job_flow_role, service_role,
        security_groups, steps, emr_client):
    try:
        response = emr_client.run_job_flow(
            Name=name,
            LogUri=log_uri,
            ReleaseLabel='emr-5.30.1',
            Instances={
                'MasterInstanceType': 'm5.xlarge',
                'SlaveInstanceType': 'm5.xlarge',
                'InstanceCount': 3,
                'KeepJobFlowAliveWhenNoSteps': keep_alive,
                'EmrManagedMasterSecurityGroup': security_groups['manager'].id,
                'EmrManagedSlaveSecurityGroup': security_groups['worker'].id,
            },
            Steps=[{
                'Name': step['name'],
                'ActionOnFailure': 'CONTINUE',
                'HadoopJarStep': {
                    'Jar': 'command-runner.jar',
                    'Args': ['spark-submit', '--deploy-mode', 'cluster',
                             step['script_uri'], *step['script_args']]
                }
            } for step in steps],
            Applications=[{
                'Name': app
            } for app in applications],
            JobFlowRole=job_flow_role.name,
            ServiceRole=service_role.name,
            EbsRootVolumeSize=10,
            VisibleToAllUsers=True
        )
        cluster_id = response['JobFlowId']
        logger.info("Created cluster %s.", cluster_id)
    except ClientError:
        logger.exception("Couldn't create cluster.")
        raise
    else:
        return cluster_id
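The demo also calls this function with real parameters; that calling code isn't reproduced here, but a hypothetical invocation might look like the following (the bucket, script, security-group IDs, and role names are all assumptions, not the demo's actual values):

import boto3

emr_client = boto3.client('emr')
ec2 = boto3.resource('ec2')
iam = boto3.resource('iam')

cluster_id = run_job_flow(
    name='demo-cluster',
    log_uri='s3://my-demo-bucket/logs/',
    keep_alive=False,  # short-lived cluster: terminate once all steps finish
    applications=['Hadoop', 'Spark'],
    job_flow_role=iam.Role('EMR_EC2_DefaultRole'),   # .name is read inside run_job_flow
    service_role=iam.Role('EMR_DefaultRole'),
    security_groups={'manager': ec2.SecurityGroup('sg-1234'),   # .id is read inside
                     'worker': ec2.SecurityGroup('sg-5678')},
    steps=[{
        'name': 'my-spark-step',
        'script_uri': 's3://my-demo-bucket/scripts/my_script.py',
        'script_args': ['--some-arg'],
    }],
    emr_client=emr_client,
)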