How can I avoid OOM issues when writing a large dataframe in ORC format with PySpark?

Asked: 2019-03-13 12:31:59

Tags: python python-3.x apache-spark dataframe pyspark

I have two scripts, a and b. In script "a", two CSV files are read into two dataframes, joined into a single result dataframe, and that result is written back out as CSV. This job finishes without any OOM issue and is fast: with 1 billion rows, 100 columns, and 41.2 GB per CSV file, it takes 8-9 minutes.

The other script, "b", is identical to "a" in every respect but one: the output format. The input files are the same (1B rows, 100 columns, 41.2 GB CSV files), but this script saves the result dataframe in ORC format. That leads to the following error:

An error occurred while calling o91.orc. Job aborted due to stage failure: Task 36 in stage 4.0 failed 4 times, most recent failure: Lost task 36.3 in stage 4.0 (TID 800, ip-*-*-*-*.ap-south-1.compute.internal, executor 10): ExecutorLostFailure (executor 10 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 5.6 GB of 5.5 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
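Based on that message I am considering raising the executor memory overhead before tuning anything else. A minimal sketch of how that configuration could be set when building the context (the values are placeholders I have not tuned, and in a Glue job the configuration may have to be passed through job arguments instead of being set in code):

from pyspark import SparkConf
from pyspark.context import SparkContext

# Placeholder values -- tune to the cluster; newer Spark versions use
# spark.executor.memoryOverhead instead of the yarn-prefixed key.
conf = (
    SparkConf()
    .set("spark.yarn.executor.memoryOverhead", "2048")   # MB of off-heap headroom per executor
    .set("spark.sql.shuffle.partitions", "2000")         # more, smaller shuffle partitions
)

sc = SparkContext(conf=conf)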

The code that reads the CSVs and writes ORC is:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.sql import DataFrameReader, DataFrameWriter
from datetime import datetime

import time

# @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

print("All imports were successful.")


df = spark.read.csv(
    's3://****',
    header=True
)
print("First dataframe read with headers set to True")

df2 = spark.read.csv(
    's3://****',
    header=True
)

print("Second data frame read with headers set to True")

# Obtain columns lists
left_cols = df.columns
right_cols = df2.columns

# Prefix each dataframe's field with "left_" or "right_"
df = df.selectExpr([col + ' as left_' + col for col in left_cols])
df2 = df2.selectExpr([col + ' as right_' + col for col in right_cols])

# Perform join
# df3 = df.alias('l').join(df2.alias('r'), on='l.left_c_0' == 'r.right_c_0')

# df3 = df.alias('l').join(df2.alias('r'), on='c_0')

df3 = df.join(
    df2,
    df["left_column_test_0"] == df2["right_column_test_0"]
)

print("Dataframes have been joined successfully.")
output_file_path = 's3://****'

df3.write.orc(
    output_file_path
)

# print("Dataframe has been written to csv.")
job.commit()

My CSV files look like this:

0,1,2,3,4,.....99
1,2,3,4,......100
2,3,4,5,......101
.
.
.
.
[continues until the 1 billionth row]

How can I make sure my code does not run into any OOM errors?

1 Answer:

Answer 0 (score: 0)

To get past the OOM problem I had to repartition the dataframe. The reasoning is that, after repartitioning, each individual partition fits comfortably under the memory limit (for my data).

The code for this is:

df3 = df3.repartition("left_column_test_0")
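In context with the write step, a minimal sketch of the repartition-before-write approach (the column and path names are the ones from the code in the question; the explicit partition count mentioned in the comment is only an illustration, not what I actually used):

# Repartition by the join key so each task writes a bounded slice of the data.
df3 = df3.repartition("left_column_test_0")

# Alternative: repartition to an explicit number of partitions, chosen from
# total data size / desired partition size (the 2000 here is a placeholder).
# df3 = df3.repartition(2000)

df3.write.orc(output_file_path)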

That said, Spark took considerably longer with the ORC format: 29 minutes. I am still investigating why the ORC write is slower than the CSV write.
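One thing I still plan to check (this is only a guess on my part, not something I have verified) is the compression codec: the ORC writer compresses its output, which costs CPU time the plain CSV write did not pay. The codec can be set explicitly on the writer, for example:

# Depending on the Spark version the default ORC codec is zlib or snappy;
# "none" skips compression entirely at the cost of larger files.
df3.write.option("compression", "snappy").orc(output_file_path)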