Spark failing with "Invoking stop() from shutdown hook"

Asked: 2016-07-20 22:17:43

Tags: apache-spark pyspark spark-dataframe emr

I'm running into the following problem when running Spark on AWS EMR. While performing a join on a table to filter out some IDs, Spark suddenly dies and the stdout file reports the following:

py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.sql.execution.EvaluatePython.takeAndServe.

The command I'm executing, shown below, runs fine when I run it in local mode on my machine (on a much smaller dataset):

# Join against the sampled IDs, keep only the 'main' subcategory,
# select the columns of interest, and drop duplicates.
sampled_data = data_df \
                .join(sample_ids, data_df.entity_id == sample_ids.entity_id, 'inner') \
                .drop(data_df.entity_id) \
                .where(data_df.subcategory == 'main') \
                .select(['entity_id', 'date', 'hour', 'pageno', 'position']) \
                .dropDuplicates()

print 'sampled_data test...'
sampled_data.take(3)
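
In case the chosen execution plan is relevant, here is a minimal diagnostic sketch (not part of my original job) that prints the query plans before the action fires, using the standard DataFrame.explain() method:

# Diagnostic sketch: print the parsed/analyzed/optimized/physical plans
# so the join strategy Spark picks is visible before take() triggers
# actual execution on the cluster.
sampled_data.explain(True)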

The full error log (stderr) can be found here: http://pastebin.com/cUrPUQcX. I've gone over it a few times but can't find any problem; it just happens suddenly, without much information about the cause:

16/07/20 21:47:08 INFO SparkContext: Invoking stop() from shutdown hook

Also, if I check the executors' logs in the Web UI, I see the following:

[...]
16/07/22 15:34:43 INFO s3n.S3NativeFileSystem: Opening 's3n://path/2016-07-01/data_2016-07-01T12-03-35_node7.csv' for reading
16/07/22 15:34:43 INFO executor.CoarseGrainedExecutorBackend: Driver commanded a shutdown
16/07/22 15:34:43 INFO storage.MemoryStore: MemoryStore cleared
16/07/22 15:34:43 INFO storage.BlockManager: BlockManager stopped
16/07/22 15:34:43 INFO s3n.S3NativeFileSystem: Opening 's3n://path/2016-07-01/data_2016-07-01T12-03-36_node5.csv' for reading
16/07/22 15:34:43 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/07/22 15:34:43 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/07/22 15:34:43 WARN executor.CoarseGrainedExecutorBackend: An unknown (ip-172-31-22-115.us-west-2.compute.internal:32836) driver disconnected.
16/07/22 15:34:43 ERROR executor.CoarseGrainedExecutorBackend: Driver 172.31.22.115:32836 disassociated! Shutting down.
16/07/22 15:34:43 INFO util.ShutdownHookManager: Shutdown hook called
16/07/22 15:34:43 INFO codegen.GenerateMutableProjection: Code generated in 23.143313 ms
16/07/22 15:34:43 INFO util.ShutdownHookManager: Deleting directory /mnt/yarn/usercache/hadoop/appcache/application_1469108763595_0005/spark-f9a9e3ba-1761-49d0-84b0-8711f1ca71f0

I also initialize the cluster with "spark.executor.memory": "10G" and 5 executors.
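
For completeness, a minimal sketch of how the job is configured (the app name is illustrative, not from my actual code; the two settings are the ones mentioned above, using the standard Spark-on-YARN properties):

# Configuration sketch: 10G per executor and 5 executors on YARN.
# "sampling-job" is a hypothetical app name used for illustration.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("sampling-job")
        .set("spark.executor.memory", "10G")
        .set("spark.executor.instances", "5"))
sc = SparkContext(conf=conf)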

Any suggestions would be appreciated.

0 Answers:

No answers yet.