I'm running into an out-of-memory error when trying to persist a dataframe, and I don't really understand why. I have a dataframe of roughly 20 GB with 2.5 million rows and about 20 columns. After filtering this dataframe, I'm left with 4 columns and 500,000 rows.
Now my problem is that when I persist the filtered dataframe, I get an out-of-memory error (25.4 GB of 20 GB physical memory used). I have tried persisting at different storage levels:
from pyspark import StorageLevel

df = spark.read.parquet(path)  # 20 Gb
df_filter = df.select('a', 'b', 'c', 'd').where(df.a == something)  # a few Gb
df_filter.persist(StorageLevel.MEMORY_AND_DISK)
df_filter.count()
My cluster has 8 nodes, each with 30 GB of memory.
Do you have any idea where the OOM might be coming from?
Answer 0 (score: 1)
Just some suggestions to help you identify the root cause...
You may have one (or a combination) of the following...
# to check num partitions
df_filter.rdd.getNumPartitions()
# to repartition (**does cause a shuffle**) to increase parallelism and help with data skew
df_filter.repartition(...)  # monitor/debug performance in the Spark UI after setting this
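For example, a minimal sketch of bumping parallelism before the persist; the partition count of 200 and partitioning by column 'a' are illustrative assumptions, not values derived from your job:
# hypothetical: repartition to 200 partitions hashed on column 'a' so each
# persisted partition is small enough to fit comfortably in executor memory
df_filter = df_filter.repartition(200, 'a')
df_filter.rdd.getNumPartitions()  # confirm the new partition count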
# check via
spark.sparkContext.getConf().getAll()
# these are the ones you want to watch out for
'''
--num-executors
--executor-cores
--executor-memory
'''
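For reference, a minimal sketch of supplying the equivalent settings in Python when the session is built; the app name and all numeric values are placeholders for illustration, not recommendations tuned to your 8 x 30 GB cluster:
from pyspark.sql import SparkSession

# hypothetical values -- leave headroom for YARN/OS overhead on each 30 GB node
spark = (
    SparkSession.builder
    .appName("filter-and-persist")             # assumed app name
    .config("spark.executor.instances", "8")   # equivalent of --num-executors
    .config("spark.executor.cores", "4")       # equivalent of --executor-cores
    .config("spark.executor.memory", "20g")    # equivalent of --executor-memory
    .getOrCreate()
)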
# debug the directed acyclic graph (DAG)
df_filter.explain()  # also "babysit" the job in the Spark UI to examine the performance of each node/partition while you are persisting
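If the default plan output is hard to read, the extended mode prints the parsed, analyzed, optimized, and physical plans (standard DataFrame.explain behavior):
# print all plan stages instead of just the physical plan
df_filter.explain(True)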
# check output partitions if shuffle occurs
spark.conf.get("spark.sql.shuffle.partitions")
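A small sketch of inspecting and overriding that setting at runtime; 200 is Spark's default, and the 400 used below is only an illustrative placeholder:
spark.conf.get("spark.sql.shuffle.partitions")          # defaults to 200
spark.conf.set("spark.sql.shuffle.partitions", "400")   # placeholder value, tune to your data volume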