Spark on a Mesos cluster - tasks fail

Date: 2015-09-01 07:32:32

Tags: apache-spark spark-streaming mesos

I am trying to run a Spark application on a Mesos cluster where I have one master and one slave. The slave has 8 GB of RAM allocated to Mesos. The master is running the Spark Mesos Dispatcher.

I submit my Spark application (it is a streaming application) with the following command.

spark-submit --master mesos://mesos-master:7077 --class com.verifone.media.ums.scheduling.spark.SparkBootstrapper --deploy-mode cluster scheduling-spark-0.5.jar

I see the following output, which shows that the submission succeeded.

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/09/01 12:52:38 INFO RestSubmissionClient: Submitting a request to launch an application in mesos://mesos-master:7077.
15/09/01 12:52:39 INFO RestSubmissionClient: Submission successfully created as driver-20150901072239-0002. Polling submission state...
15/09/01 12:52:39 INFO RestSubmissionClient: Submitting a request for the status of submission driver-20150901072239-0002 in mesos://mesos-master:7077.
15/09/01 12:52:39 INFO RestSubmissionClient: State of driver driver-20150901072239-0002 is now QUEUED.
15/09/01 12:52:40 INFO RestSubmissionClient: Server responded with CreateSubmissionResponse:
{
  "action" : "CreateSubmissionResponse",
  "serverSparkVersion" : "1.4.1",
  "submissionId" : "driver-20150901072239-0002",
  "success" : true
}

However, this fails in Mesos, and when I look at the Spark cluster UI I see the following message.

task_id { value: "driver-20150901070957-0001" } state: TASK_FAILED message: "" slave_id { value: "20150831-082639-167881920-5050-4116-S6" } timestamp: 1.441091399975446E9 source: SOURCE_SLAVE reason: REASON_MEMORY_LIMIT 11: "\305-^E\377)N\327\277\361:\351\fm\215\312"

It seems to be memory-related, but I am not sure whether I need to configure something here to make it work.
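One way to check how much memory the slave is actually offering is to query its state endpoint (a diagnostic sketch; 5051 is the default Mesos slave port, and the host name is illustrative):

# Inspect the resources the slave advertises to the master
# (assumes the default slave port 5051; substitute your slave's address)
curl -s http://mesos-slave:5051/state.json | python -m json.tool | grep -A 4 '"resources"'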

UPDATE: I looked at the Mesos logs on the slave and I see the following message.

E0901 07:56:26.086618  1284 fetcher.cpp:515] Failed to run mesos-fetcher: Failed to fetch all URIs for container '33183181-e91b-4012-9e21-baa37485e755' with exit status: 256
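A quick way to reproduce the fetch outside of Mesos is to try the download by hand on the slave (a sketch; it assumes the failing URI is the executor tarball used later in this question):

# Run on the slave; -f fails on HTTP errors, -L follows redirects
curl -fLO http://d3kbcqa49mib13.cloudfront.net/spark-1.4.1-bin-hadoop2.6.tgz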

So I thought this might be because of the Spark executor URI, so I modified the spark-submit command as follows and increased the memory for the driver and the executors, but I still see the same error.

spark-submit \
    --master mesos://mesos-master:7077 \
    --class com.verifone.media.ums.scheduling.spark.SparkBootstrapper \
    --deploy-mode cluster \
    --driver-memory 1G \
    --executor-memory 4G \
    --conf spark.executor.uri=http://d3kbcqa49mib13.cloudfront.net/spark-1.4.1-bin-hadoop2.6.tgz \
    scheduling-spark-0.5.jar
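
Rather than passing the URI on every submit, the same setting can also live in conf/spark-defaults.conf on the machine running spark-submit (a sketch using the tarball URL from the command above):

# conf/spark-defaults.conf
spark.executor.uri    http://d3kbcqa49mib13.cloudfront.net/spark-1.4.1-bin-hadoop2.6.tgz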

UPDATE 2

I followed @hartem's suggestion about this (see the comments). The task is now running, but the actual Spark application is still not running in the cluster. When I look at the logs I see the following; after the last line, Spark does not seem to proceed any further.

15/09/01 10:33:41 INFO SparkContext: Added JAR file:/tmp/mesos/slaves/20150831-082639-167881920-5050-4116-S8/frameworks/20150831-082639-167881920-5050-4116-0004/executors/driver-20150901103327-0002/runs/47339c12-fb78-43d6-bc8a-958dd94d0ccf/spark-1.4.1-bin-hadoop2.6/../scheduling-spark-0.5.jar at http://192.172.1.31:33666/jars/scheduling-spark-0.5.jar with timestamp 1441103621639
I0901 10:33:41.728466  4375 sched.cpp:157] Version: 0.23.0
I0901 10:33:41.730764  4383 sched.cpp:254] New master detected at master@192.172.1.10:7077
I0901 10:33:41.730908  4383 sched.cpp:264] No credentials provided. Attempting to register without authentication
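
To check whether the driver's framework ever completed registration, the Mesos master's state endpoint lists all known frameworks (a diagnostic sketch; 5050 is the default Mesos master port, and the host name is illustrative):

# The driver should appear in the frameworks list once registration succeeds
curl -s http://mesos-master:5050/state.json | python -m json.tool | grep '"name"'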

2 Answers:

Answer 0 (score: 1)

I had a similar problem: the slave could not find the jar needed to run the class file (SparkPi). So I gave it an http URL for the jar and it worked; the jar needs to be on a system the slaves can reach rather than on the local file system.

/home/centos/spark-1.6.1-bin-hadoop2.6/bin/spark-submit \
  --name SparkPiTestApp \
  --class org.apache.spark.examples.SparkPi \
  --master mesos://xxxxxxx:7077 \
  --deploy-mode cluster \
  --executor-memory 5G --total-executor-cores 30 \
  http://downloads.mesosphere.com.s3.amazonaws.com/assets/spark/spark-examples_2.10-1.4.0-SNAPSHOT.jar 100
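
If you don't have a distributed filesystem at hand, a throwaway HTTP server in the jar's directory is enough for the slaves to fetch it (a sketch; the port is arbitrary):

# Serve the directory containing the jar over HTTP on port 8000,
# then point spark-submit at http://<this-host>:8000/<jar-name>
cd /path/to/jar && python -m SimpleHTTPServer 8000   # Python 2; use `python3 -m http.server 8000` on Python 3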

Answer 1 (score: 0)

Could you export GLOG_v=1 before starting the slave and see whether anything interesting shows up in the slave logs? I would also look for the stdout and stderr files under the slave working directory and see whether they contain any clues.
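
A minimal sketch of that suggestion (the launch flags and paths are illustrative; adjust them to how your slave is normally started):

# Restart the slave with verbose glog output
export GLOG_v=1
mesos-slave --master=mesos-master:5050 --work_dir=/var/lib/mesos

# Then look for clues in the task sandboxes under the work directory
find /var/lib/mesos -name stdout -o -name stderr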