Why is Spark running with less memory than is available?

Date: 2015-11-19 05:44:58

Tags: java apache-spark pyspark spark-streaming

I am running a single-node application with Spark on a machine with 32 GB of RAM. More than 12 GB of the memory is free while my application is running.

But from the Spark UI and the logs, I see that it uses only 3.8 GB of RAM (which gradually decreases as the jobs run).

At the time this is logged, more than 5 GB of memory is still free. Why is Spark limited to 3.8 GB?

Update

I set these parameters in conf/spark-env.sh, but each time I run the application it still uses exactly 3.8 GB:

export SPARK_WORKER_MEMORY=6g
export SPARK_MEM=6g
export SPARK_DAEMON_MEMORY=6g

Logs

2015-11-19 13:05:41,701 INFO org.apache.spark.SparkEnv.logInfo:59 - Registering MapOutputTracker

2015-11-19 13:05:41,716 INFO org.apache.spark.SparkEnv.logInfo:59 - Registering BlockManagerMaster

2015-11-19 13:05:41,735 INFO org.apache.spark.storage.DiskBlockManager.logInfo:59 - Created local directory at /usr/local/TC_SPARCDC_COM/temp/blockmgr-8513cd3b-ac03-4c0a-b291-65aba4cbc395

2015-11-19 13:05:41,746 INFO org.apache.spark.storage.MemoryStore.logInfo:59 - MemoryStore started with capacity 3.8 GB

2015-11-19 13:05:41,777 INFO org.apache.spark.HttpFileServer.logInfo:59 - HTTP File server directory is /usr/local/TC_SPARCDC_COM/temp/spark-b86380c2-4cbd-43d6-a3b7-aa03d9a05a84/httpd-ceaffbd0-eac4-447e-9d3f-c452627a28cb

2015-11-19 13:05:41,781 INFO org.apache.spark.HttpServer.logInfo:59 - Starting HTTP Server

2015-11-19 13:05:41,842 INFO org.spark-project.jetty.server.Server.doStart:272 - jetty-8.y.z-SNAPSHOT

2015-11-19 13:05:41,854 INFO org.spark-project.jetty.server.AbstractConnector.doStart:338 - Started SocketConnector@0.0.0.0:5279

2015-11-19 13:05:41,855 INFO org.apache.spark.util.Utils.logInfo:59 - Successfully started service 'HTTP file server' on port 5279.

2015-11-19 13:05:41,867 INFO org.apache.spark.SparkEnv.logInfo:59 - Registering OutputCommitCoordinator

2015-11-19 13:05:42,013 INFO org.spark-project.jetty.server.Server.doStart:272 - jetty-8.y.z-SNAPSHOT

2015-11-19 13:05:42,039 INFO org.spark-project.jetty.server.AbstractConnector.doStart:338 - Started SelectChannelConnector@0.0.0.0:4040

2015-11-19 13:05:42,039 INFO org.apache.spark.util.Utils.logInfo:59 - Successfully started service 'SparkUI' on port 4040.

2015-11-19 13:05:42,041 INFO org.apache.spark.ui.SparkUI.logInfo:59 - Started SparkUI at http://103.252.184.181:4040

2015-11-19 13:05:42,114 WARN org.apache.spark.metrics.MetricsSystem.logWarning:71 - Using default name DAGScheduler for source because spark.app.id is not set.

2015-11-19 13:05:42,117 INFO org.apache.spark.executor.Executor.logInfo:59 - Starting executor ID driver on host localhost

2015-11-19 13:05:42,307 INFO org.apache.spark.util.Utils.logInfo:59 - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 31334.

2015-11-19 13:05:42,308 INFO org.apache.spark.network.netty.NettyBlockTransferService.logInfo:59 - Server created on 31334

2015-11-19 13:05:42,309 INFO org.apache.spark.storage.BlockManagerMaster.logInfo:59 - Trying to register BlockManager

2015-11-19 13:05:42,312 INFO org.apache.spark.storage.BlockManagerMasterEndpoint.logInfo:59 - Registering block manager localhost:31334 with 3.8 GB RAM, BlockManagerId(driver, localhost, 31334)

2015-11-19 13:05:42,313 INFO org.apache.spark.storage.BlockManagerMaster.logInfo:59 - Registered BlockManager

2 Answers:

Answer 0 (score: 2)

If you are using spark-submit, you can use the --executor-memory and --driver-memory flags. Otherwise, change the configurations spark.executor.memory and spark.driver.memory, either directly in your program or in spark-defaults.conf.
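For example, a minimal sketch of both routes, assuming a 6 GB target (the values, class name, and jar path are placeholders):

# At submit time:
spark-submit --driver-memory 6g --executor-memory 6g \
             --class <class-name> <path-to-jar>

# Or persistently, in conf/spark-defaults.conf:
spark.driver.memory   6g
spark.executor.memory 6g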

Note that you shouldn't set the memory too high. As a rule of thumb, aim for about 75% of available memory; that leaves enough memory for the other processes running on your machine, such as your operating system.
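Applied to the 32 GB machine from the question, the rule of thumb works out roughly as follows (a sketch; the exact split is a judgment call):

# 75% rule of thumb: 0.75 * 32 GB ≈ 24 GB for Spark,
# leaving ~8 GB for the OS and other processes.
spark.driver.memory 24g    # e.g. in conf/spark-defaults.conf (single-node app)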

Answer 1 (score: 1)

@Glennie Helles Sindholt explained it correctly, but setting driver flags from inside the program when submitting jobs on a standalone machine will not affect the usage, since the JVM has already been initialized. Check out this discussion:

How to set Apache Spark Executor memory
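The practical consequence, as a sketch (class name and jar path are placeholders): in client mode the driver JVM starts before your application's SparkConf is read, so spark.driver.memory has to be set at submit time or in spark-defaults.conf, not inside the program.

# Ignored in client mode -- the driver JVM is already running:
#   sparkConf.set("spark.driver.memory", "6g")
# Works, because it is applied before the driver JVM starts:
spark-submit --driver-memory 6g --class <class-name> <path-to-jar>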

If you are submitting the job with the spark-submit command, here is an example of how to set the parameters when submitting it:

spark-submit --master spark://127.0.0.1:7077 \
             --num-executors 2 \
             --executor-cores 8 \
             --executor-memory 3g \
             --class <Class name> \
             $JAR_FILE_NAME \
             /path-to-input \
             /path-to-output
By varying these parameters, you can see and understand how the RAM usage changes. There is also a utility on Linux called htop; its instantaneous view of memory, CPU-core, and swap usage is very helpful for understanding what is going on. To install htop, use:

sudo apt-get install htop
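Once installed, run it in a second terminal while the job is running; free -h gives a quick one-shot summary as well:

htop       # interactive, per-process view of memory, CPU cores, and swap
free -h    # one-shot summary of total, used, and free memory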

It looks like this: (screenshot of the htop utility)

For more details, check out this link:

https://spark.apache.org/docs/latest/configuration.html