Via AWS [EMR]

Time: 2017-04-15 09:41:44

Tags: amazon-web-services apache-spark cloud hdfs emr

Hello, I'm new to cloud computing, so I apologize for this (perhaps) silly question. I need help to know whether what I'm doing actually computes on the cluster or only on the master node (which would be useless).

What I can do: with the AWS console I can set up a cluster with a certain number of nodes, with Spark installed on all of them. I can connect to the master node via SSH. What does it take to run my jar of Spark code on the cluster?

What I do: I call spark-submit to run my code:

spark-submit --class cc.Main /home/ubuntu/MySparkCode.jar 3 [arguments] 

MY DOUBTS:

  1. Do I need to specify the master with --master and the "spark://" master reference? Where can I find that reference? Should I run the script in sbin/start-master.sh to start a standalone cluster manager, or is it already set up? If I run the code above, I imagine it runs only locally on the master, right?

  2. Can I keep the input file on the master node only? Say I want to count the words of a huge text file: can I keep it on the master's disk only? Or do I need distributed storage such as HDFS to preserve parallelism? I don't understand this point: would keeping it on the master node's disk be fine?

  3. Thank you very much for your replies.

    UPDATE1: I tried to run the Pi example on the cluster, but I cannot get the result.

    $ sudo spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster /usr/lib/spark/examples/jars/spark-examples.jar 10
    

    I expected to get a line printing "Pi is roughly 3.14..."; instead I get:

    17/04/15 13:16:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    17/04/15 13:16:03 INFO RMProxy: Connecting to ResourceManager at ip-172-31-37-222.us-west-2.compute.internal/172.31.37.222:8032
    17/04/15 13:16:03 INFO Client: Requesting a new application from cluster with 2 NodeManagers 
    17/04/15 13:16:03 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (5120 MB per container)
    17/04/15 13:16:03 INFO Client: Will allocate AM container, with 5120 MB memory including 465 MB overhead
    17/04/15 13:16:03 INFO Client: Setting up container launch context for our AM
    17/04/15 13:16:03 INFO Client: Setting up the launch environment for our AM container
    17/04/15 13:16:03 INFO Client: Preparing resources for our AM container
    17/04/15 13:16:06 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
    17/04/15 13:16:10 INFO Client: Uploading resource file:/mnt/tmp/spark-aa757ca0-4ff7-460c-8bee-27bc8c8dada9/__spark_libs__5838015067814081789.zip -> hdfs://ip-172-31-37-222.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1492261407069_0007/__spark_libs__5838015067814081789.zip
    17/04/15 13:16:12 INFO Client: Uploading resource file:/usr/lib/spark/examples/jars/spark-examples.jar -> hdfs://ip-172-31-37-222.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1492261407069_0007/spark-examples.jar
    17/04/15 13:16:12 INFO Client: Uploading resource file:/mnt/tmp/spark-aa757ca0-4ff7-460c-8bee-27bc8c8dada9/__spark_conf__1370316719712336297.zip -> hdfs://ip-172-31-37-222.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1492261407069_0007/__spark_conf__.zip
    17/04/15 13:16:13 INFO SecurityManager: Changing view acls to: root
    17/04/15 13:16:13 INFO SecurityManager: Changing modify acls to: root
    17/04/15 13:16:13 INFO SecurityManager: Changing view acls groups to: 
    17/04/15 13:16:13 INFO SecurityManager: Changing modify acls groups to: 
    17/04/15 13:16:13 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
    
    17/04/15 13:16:13 INFO Client: Submitting application application_1492261407069_0007 to ResourceManager
    17/04/15 13:16:13 INFO YarnClientImpl: Submitted application application_1492261407069_0007
    17/04/15 13:16:14 INFO Client: Application report for application_1492261407069_0007 (state: ACCEPTED)
    17/04/15 13:16:14 INFO Client: 
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1492262173096
         final status: UNDEFINED
         tracking URL: http://ip-172-31-37-222.us-west-2.compute.internal:20888/proxy/application_1492261407069_0007/
         user: root
    17/04/15 13:16:15 INFO Client: Application report for application_1492261407069_0007 (state: ACCEPTED)
    17/04/15 13:16:24 INFO Client: Application report for application_1492261407069_0007 (state: ACCEPTED)
    17/04/15 13:16:25 INFO Client: Application report for application_1492261407069_0007 (state: RUNNING)
    17/04/15 13:16:25 INFO Client: 
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: 172.31.33.215
         ApplicationMaster RPC port: 0
         queue: default
         start time: 1492262173096
         final status: UNDEFINED
         tracking URL: http://ip-172-31-37-222.us-west-2.compute.internal:20888/proxy/application_1492261407069_0007/
         user: root
    17/04/15 13:16:26 INFO Client: Application report for application_1492261407069_0007 (state: RUNNING)
    17/04/15 13:16:55 INFO Client: Application report for application_1492261407069_0007 (state: RUNNING)
    17/04/15 13:16:56 INFO Client: Application report for application_1492261407069_0007 (state: FINISHED)
    17/04/15 13:16:56 INFO Client: 
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: 172.31.33.215
         ApplicationMaster RPC port: 0
         queue: default
         start time: 1492262173096
         final status: SUCCEEDED
         tracking URL: http://ip-172-31-37-222.us-west-2.compute.internal:20888/proxy/application_1492261407069_0007/
         user: root
    17/04/15 13:16:56 INFO ShutdownHookManager: Shutdown hook called
    17/04/15 13:16:56 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-aa757ca0-4ff7-460c-8bee-27bc8c8dada9
    

2 Answers:

Answer 0 (score: 2)

Answer to your first doubt:

I assume you want to run Spark on YARN. You just need to pass --master yarn --deploy-mode cluster, and the Spark driver will run inside an ApplicationMaster process managed by YARN on the cluster:

spark-submit --master yarn  --deploy-mode cluster \
    --class cc.Main /home/ubuntu/MySparkCode.jar 3 [arguments] 
Other modes: see the Reference.
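
For orientation, these are the common --master values (a sketch; HOST:7077 is a placeholder for a standalone master's address, and on EMR you do not need sbin/start-master.sh, because YARN is already running as the cluster manager):

    # Run locally on a single machine, using all of its cores:
    spark-submit --master local[*] --class cc.Main /home/ubuntu/MySparkCode.jar

    # Standalone cluster manager (the "spark://" reference from doubt 1):
    spark-submit --master spark://HOST:7077 --class cc.Main /home/ubuntu/MySparkCode.jar

    # YARN, which EMR sets up for you:
    spark-submit --master yarn --deploy-mode cluster --class cc.Main /home/ubuntu/MySparkCode.jar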

When you run a job with --deploy-mode cluster, you will not see the output on the machine you submitted from (if your program prints something).

Reason: you are running the job in cluster mode, so the driver runs on one of the nodes in the cluster, and anything it prints is emitted on that machine.

To check the output, you can look it up in the application logs with the following command:

yarn logs -applicationId application_id
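
For example, the application id can be read from the log in the question ("Submitted application application_1492261407069_0007"), so the call for that run would be:

    yarn logs -applicationId application_1492261407069_0007

The "Pi is roughly 3.14..." line should then appear in the stdout of the driver's (ApplicationMaster's) container log.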

Answer to your second doubt:

You can keep the input file anywhere (master node / HDFS).

Parallelism depends entirely on the number of partitions of the RDD/DataFrame created when the data is loaded. The number of partitions depends on the data size, but you can control it by passing an argument when you load the data.

If you want to load the data from the master:

 val rdd = sc.textFile("/home/ubuntu/input.txt", [number of partitions])

This creates an rdd with the number of partitions you pass. If you do not pass a number of partitions, it falls back to spark.default.parallelism configured in the Spark conf.
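
For example, that default can also be set at submit time with --conf (a sketch; the value 8 here is arbitrary):

    spark-submit --conf spark.default.parallelism=8 \
        --master yarn --deploy-mode cluster \
        --class cc.Main /home/ubuntu/MySparkCode.jar 3 [arguments]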

If you want to load the data from HDFS:

 val rdd = sc.textFile("hdfs://namenode:8020/data/input.txt")

This creates an rdd with a number of partitions equal to the number of blocks of the file in HDFS.
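
To tie this back to the word-count use case from doubt 2, a minimal sketch for the spark-shell (assuming the file was first copied into HDFS, e.g. with hadoop fs -put input.txt /data/input.txt, and that sc is the shell's SparkContext):

    // Load from HDFS; by default one partition per HDFS block.
    val counts = sc.textFile("hdfs://namenode:8020/data/input.txt")
      .flatMap(line => line.split("\\s+")) // split each line into words
      .map(word => (word, 1))              // pair every word with a count of 1
      .reduceByKey(_ + _)                  // sum the counts per word, in parallel

    // Bring a small sample back to the driver to inspect:
    counts.take(10).foreach(println)

Because reduceByKey aggregates across partitions, this work runs on the executors spread over the cluster, not only on the master.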

Hope my answer helps you.

Answer 1 (score: 0)

You can use this instead; with --deploy-mode client the driver runs on the machine where you invoke spark-submit, so the "Pi is roughly 3.14..." line is printed directly to your console:

spark-submit --deploy-mode client --executor-memory 4g --class org.apache.spark.examples.SparkPi /usr/lib/spark/examples/jars/spark-examples.jar