Using RStudio/sparklyr to connect to a local Spark instance started from IntelliJ

Asked: 2017-11-11 01:21:03

Tags: apache-spark intellij-idea rstudio sparklyr

Good morning. This may sound like a silly question, but I want to access the temporary tables of Spark from RStudio. I don't have a Spark cluster; I just run everything locally on my PC. When I start Spark through IntelliJ, the instance runs fine:

17/11/11 10:11:33 INFO Utils: Successfully started service 'sparkDriver' on port 59505.
17/11/11 10:11:33 INFO SparkEnv: Registering MapOutputTracker
17/11/11 10:11:33 INFO SparkEnv: Registering BlockManagerMaster
17/11/11 10:11:33 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/11/11 10:11:33 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/11/11 10:11:33 INFO DiskBlockManager: Created local directory at C:\Users\stephan\AppData\Local\Temp\blockmgr-7ca4e8fb-9456-4063-bc6d-39324d7dad4c
17/11/11 10:11:33 INFO MemoryStore: MemoryStore started with capacity 898.5 MB
17/11/11 10:11:33 INFO SparkEnv: Registering OutputCommitCoordinator
17/11/11 10:11:33 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/11/11 10:11:34 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://172.25.240.1:4040
17/11/11 10:11:34 INFO Executor: Starting executor ID driver on host localhost
17/11/11 10:11:34 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 59516.
17/11/11 10:11:34 INFO NettyBlockTransferService: Server created on 172.25.240.1:59516

But I am not sure which port I have to choose in RStudio/sparklyr:

sc <- spark_connect(master = "spark://localhost:7077", spark_home = "C://Users//stephan//Downloads//spark//spark-2.2.0-bin-hadoop2.7", version = "2.2.0")
Error in file(con, "r") : cannot open the connection
In addition: Warning message:
In file(con, "r") :
  cannot open file 'C:\Users\stephan\AppData\Local\Temp\Rtmp61Ejow\file2fa024ce51af_spark.log': Permission denied

I tried different ports, e.g. 59516, 4040, ..., but they all led to the same result. Since the file is actually written fine, I guess the permission-denied message can be ignored:

17/11/11 01:07:30 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master localhost:7077

Could anyone please help me set up a connection between RStudio and the locally running Spark instance, without RStudio starting another Spark instance of its own?

Thanks, Stephan

1 Answer:

Answer 0 (score: 0):

Running a standalone Spark cluster is not the same as running Spark in local mode in your IDE, which is probably what is happening here. Local mode does not create any persistent service you could attach to.
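
For comparison, a minimal sketch of what a purely local sparklyr connection looks like (reusing the spark_home path from the question); this starts a fresh, private local-mode session and therefore cannot see the temporary tables of the IntelliJ-started instance:

library(sparklyr)

# local mode: sparklyr launches its own short-lived Spark session;
# nothing from the IntelliJ-started instance is visible here
sc_local <- spark_connect(
  master     = "local",
  spark_home = "C://Users//stephan//Downloads//spark//spark-2.2.0-bin-hadoop2.7",
  version    = "2.2.0"
)

spark_disconnect(sc_local)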

To run your own "pseudo-distributed" cluster:

  • Download the Spark binaries.
  • Start the Spark master using the $SPARK_HOME/sbin/start-master.sh script.
  • Start a Spark worker using the $SPARK_HOME/sbin/start-slave.sh script, passing it the master URL (see the connection sketch after this list).
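
Once the master and a worker are up (the master URL is printed in the master's startup log), sparklyr can attach to the standalone cluster. A minimal sketch, assuming the default master port 7077 and the spark_home path from the question:

library(sparklyr)

# connect to the standalone master started via start-master.sh / start-slave.sh;
# replace localhost:7077 with the master URL from the master's log if it differs
sc <- spark_connect(
  master     = "spark://localhost:7077",
  spark_home = "C://Users//stephan//Downloads//spark//spark-2.2.0-bin-hadoop2.7",
  version    = "2.2.0"
)

# quick sanity check via sparklyr's DBI interface
library(DBI)
dbGetQuery(sc, "SHOW TABLES")

Note that even then each Spark application has its own session; sharing tables between the IntelliJ application and RStudio additionally requires the shared metastore mentioned below.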

To be able to share tables you will also need a proper metastore (and not Derby).
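
A hedged sketch of how that might be wired in from the sparklyr side, assuming an external Hive metastore service is already running (the thrift://localhost:9083 address is a placeholder, not something from the question):

library(sparklyr)

config <- spark_config()
# point this session at the shared external Hive metastore
# (placeholder address; both applications must use the same one)
config[["spark.hadoop.hive.metastore.uris"]] <- "thrift://localhost:9083"

sc <- spark_connect(
  master     = "spark://localhost:7077",
  config     = config,
  spark_home = "C://Users//stephan//Downloads//spark//spark-2.2.0-bin-hadoop2.7",
  version    = "2.2.0"
)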