sparkR: verifying the number of worker nodes

Asked: 2016-11-20 10:23:43

Tags: sparkr

After launching a spark-ec2 cluster, I start sparkR from /root:

$ ./spark/bin/sparkR

A few lines of the resulting startup output include:

16/11/20 10:13:51 WARN SparkConf: SPARK_WORKER_INSTANCES was detected (set to '1').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with --num-executors to specify the number of executors
 - Or set SPARK_EXECUTOR_INSTANCES
 - spark.executor.instances to configure the number of instances in the spark config.
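For reference, the first of those suggestions would be invoked along these lines (a sketch only, following the warning's own advice; the master URL and the R script name are placeholders, not from the original post):

$ ./spark/bin/spark-submit --num-executors 2 --master spark://<master-ip>:7077 my_script.R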

Following the last of those suggestions, I added the final line shown below to spark-defaults.conf:

$ pwd
/root/spark/conf
$ cat spark-defaults.conf
spark.executor.memory   512m
spark.executor.extraLibraryPath /root/ephemeral-hdfs/lib/native/
spark.executor.extraClassPath   /root/ephemeral-hdfs/conf
spark.executor.instances 2

After this, the warning is no longer printed.
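Equivalently, since bin/sparkR forwards its options to spark-submit (in Spark 1.4+), the same setting could be passed at launch instead of via the file, for example:

$ ./spark/bin/sparkR --conf spark.executor.instances=2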

From within sparkR, how can I verify the number of worker nodes that will be used?

1 Answer:

Answer 0 (score: 0)

After starting the Spark cluster, you can check the current workers and executors in the Spark master UI at Master_IP:8080 (for example, locally at localhost:8080). You can also verify that your configuration was applied correctly on the Environment tab of the application UI at localhost:4040.
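For a quick check without a browser, the application UI also exposes a REST API (available since Spark 1.4). A minimal sketch; the application ID in the second call is a placeholder you would read off the first response:

$ curl http://localhost:4040/api/v1/applications
$ curl http://localhost:4040/api/v1/applications/<app-id>/executors

The second call returns one JSON entry per active executor plus one for the driver, so counting the entries tells you how many executors were actually granted.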