OutOfMemoryError when generating a JSON file of 60 million records with PySpark

Date: 2019-04-17 08:34:01

Tags: apache-spark pyspark python-3.6

With the PySpark code below, I can successfully generate a CSV file of 60 million records from an Oracle DB over a JDBC connection.

I now want the output in JSON format, so I added this line: df1.toPandas().to_json("/home/user1/empdata.json", orient='records'). However, I get an OutOfMemoryError while the JSON is being generated.

Please suggest any code changes that are needed.

from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("Emp data Extract") \
    .config("spark.some.config.option", " ") \
    .getOrCreate()

def generateData():
    try:
        jdbcUrl = "jdbc:oracle:thin:USER/pwd@//hostname:1521/dbname"
        jdbcDriver = "oracle.jdbc.driver.OracleDriver"
        df1 = spark.read.format('jdbc').options(url=jdbcUrl, dbtable="(SELECT * FROM EMP) alias1", driver=jdbcDriver, fetchSize="2000").load()
        #df1.coalesce(1).write.format("csv").option("header", "true").save("/home/user1/empdata" , index=False)
        df1.toPandas().to_json("/home/user1/empdata.json", orient='records')
    except Exception as err:
        print(err)
        raise
    # finally:
    # conn.close()

if __name__ == '__main__':
    generateData()

Error log:

2019-04-15 05:17:06 WARN  Utils:66 - Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
[Stage 0:>                                                          (0 + 1) / 1]2019-04-15 05:20:22 ERROR Executor:91 - Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Arrays.java:3236)
        at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
        at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
        at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
        at net.jpountz.lz4.LZ4BlockOutputStream.flushBufferedData(LZ4BlockOutputStream.java:220)
        at net.jpountz.lz4.LZ4BlockOutputStream.write(LZ4BlockOutputStream.java:173)
        at java.io.DataOutputStream.write(DataOutputStream.java:107)
        at org.apache.spark.sql.catalyst.expressions.UnsafeRow.writeToStream(UnsafeRow.java:552)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:256)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:836)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:836)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
2019-04-15 05:20:22 ERROR SparkUncaughtExceptionHandler:91 - Uncaught exception in thread Thread[Executor task launch worker for task 0,5,main]
java.lang.OutOfMemoryError: Java heap space

Update, as requested by a moderator: this is a different question. Other OutOfMemory questions exist, but they arise in different scenarios. The error may be the same, but the problem is not. In my case, it is caused by the sheer volume of data.

1 answer:

Answer 0: (score: 3)

If you want to save as JSON, you should use Spark's write command. What you are currently doing is bringing all the data to the driver and then attempting to load it into a pandas DataFrame:

df1.write.format('json').save('/path/file_name.json')

If you need a single file, you can try

df1.coalesce(1).write.format('json').save('/path/file_name.json')