Spark on Mesos executors fail with OOM errors

Time: 2018-03-21 10:02:32

Tags: java apache-spark apache-kafka mesos parquet

We are running Spark 2.0.2 managed by a DC/OS system, reading data from a Kafka 1.0.0 messaging service and writing Parquet files to HDFS. Everything worked fine, but when we increased the number of topics in Kafka, our Spark executors started crashing constantly with OOM errors:

    java.lang.OutOfMemoryError: Java heap space
    at org.apache.parquet.column.values.dictionary.IntList.initSlab(IntList.java:90)
    at org.apache.parquet.column.values.dictionary.IntList.<init>(IntList.java:86)
    at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter.<init>(DictionaryValuesWriter.java:93)
    at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter$PlainDoubleDictionaryValuesWriter.<init>(DictionaryValuesWriter.java:422)
    at org.apache.parquet.column.ParquetProperties.dictionaryWriter(ParquetProperties.java:139)
    at org.apache.parquet.column.ParquetProperties.dictWriterWithFallBack(ParquetProperties.java:178)
    at org.apache.parquet.column.ParquetProperties.getValuesWriter(ParquetProperties.java:203)
    at org.apache.parquet.column.impl.ColumnWriterV1.<init>(ColumnWriterV1.java:83)
    at org.apache.parquet.column.impl.ColumnWriteStoreV1.newMemColumn(ColumnWriteStoreV1.java:68)
    at org.apache.parquet.column.impl.ColumnWriteStoreV1.getColumnWriter(ColumnWriteStoreV1.java:56)
    at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.<init>(MessageColumnIO.java:183)
    at org.apache.parquet.io.MessageColumnIO.getRecordWriter(MessageColumnIO.java:375)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.initStore(InternalParquetRecordWriter.java:109)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.<init>(InternalParquetRecordWriter.java:99)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:217)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:175)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:146)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:113)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:87)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:62)
    at org.apache.parquet.avro.AvroParquetWriter.<init>(AvroParquetWriter.java:47)
    at npm.parquet.ParquetMeasurementWriter.ensureOpenWriter(ParquetMeasurementWriter.java:91)
    at npm.parquet.ParquetMeasurementWriter.write(ParquetMeasurementWriter.java:75)
    at npm.ingestion.spark.StagingArea$Measurements.store(StagingArea.java:100)
    at npm.ingestion.spark.StagingArea$StagingAreaStorage.store(StagingArea.java:80)
    at npm.ingestion.spark.StagingArea.add(StagingArea.java:40)
    at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.sendToStagingArea(Kafka2HDFSPM.java:207)
    at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.consumeRecords(Kafka2HDFSPM.java:193)
    at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.process(Kafka2HDFSPM.java:169)
    at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:133)
    at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:111)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
18/03/20 18:41:13 ERROR [Executor task launch worker-0] SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
java.lang.OutOfMemoryError: Java heap space
    at org.apache.parquet.column.values.dictionary.IntList.initSlab(IntList.java:90)
    at org.apache.parquet.column.values.dictionary.IntList.<init>(IntList.java:86)
    at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter.<init>(DictionaryValuesWriter.java:93)
    at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter$PlainDoubleDictionaryValuesWriter.<init>(DictionaryValuesWriter.java:422)
    at org.apache.parquet.column.ParquetProperties.dictionaryWriter(ParquetProperties.java:139)
    at org.apache.parquet.column.ParquetProperties.dictWriterWithFallBack(ParquetProperties.java:178)
    at org.apache.parquet.column.ParquetProperties.getValuesWriter(ParquetProperties.java:203)
    at org.apache.parquet.column.impl.ColumnWriterV1.<init>(ColumnWriterV1.java:83)
    at org.apache.parquet.column.impl.ColumnWriteStoreV1.newMemColumn(ColumnWriteStoreV1.java:68)
    at org.apache.parquet.column.impl.ColumnWriteStoreV1.getColumnWriter(ColumnWriteStoreV1.java:56)
    at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.<init>(MessageColumnIO.java:183)
    at org.apache.parquet.io.MessageColumnIO.getRecordWriter(MessageColumnIO.java:375)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.initStore(InternalParquetRecordWriter.java:109)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.<init>(InternalParquetRecordWriter.java:99)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:217)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:175)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:146)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:113)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:87)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:62)
    at org.apache.parquet.avro.AvroParquetWriter.<init>(AvroParquetWriter.java:47)
    at npm.parquet.ParquetMeasurementWriter.ensureOpenWriter(ParquetMeasurementWriter.java:91)
    at npm.parquet.ParquetMeasurementWriter.write(ParquetMeasurementWriter.java:75)
    at npm.ingestion.spark.StagingArea$Measurements.store(StagingArea.java:100)
    at npm.ingestion.spark.StagingArea$StagingAreaStorage.store(StagingArea.java:80)
    at npm.ingestion.spark.StagingArea.add(StagingArea.java:40)
    at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.sendToStagingArea(Kafka2HDFSPM.java:207)
    at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.consumeRecords(Kafka2HDFSPM.java:193)
    at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.process(Kafka2HDFSPM.java:169)
    at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:133)
    at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:111)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)

We tried increasing the memory available to the executors and reviewed the code, but we could not find anything wrong.

Another detail: we are using plain RDDs in Spark; a minimal sketch of the write pattern involved is shown below.
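For context, this is a hypothetical sketch (class and method names invented, not our actual code) of the write path the stack trace points at: each partition opens an AvroParquetWriter and streams records to HDFS. Every open writer buffers column pages and dictionary slabs on the executor heap, so memory use grows with the number of writers kept open at once, roughly one per topic in our setup.

    import java.util.UUID;
    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.hadoop.fs.Path;
    import org.apache.parquet.avro.AvroParquetWriter;
    import org.apache.parquet.hadoop.ParquetWriter;
    import org.apache.spark.api.java.JavaRDD;

    public class ParquetPartitionWriter {
        // Hypothetical illustration of the pattern, not the real Kafka2HDFSPM code.
        public static void writePartitions(JavaRDD<GenericRecord> records,
                                           String schemaJson, String dir) {
            records.foreachPartition(rows -> {
                // Parse the schema inside the task: Avro Schema is not Serializable.
                Schema schema = new Schema.Parser().parse(schemaJson);
                Path file = new Path(dir, "part-" + UUID.randomUUID() + ".parquet");
                // The writer allocates per-column buffers and dictionary slabs on the heap.
                try (ParquetWriter<GenericRecord> writer = new AvroParquetWriter<>(file, schema)) {
                    while (rows.hasNext()) {
                        writer.write(rows.next());
                    }
                }
            });
        }
    }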

Has anyone run into a similar problem and solved it?

1 Answer:

Answer 0 (score: 0)

What is the heap configuration for your executors? By default, Java sizes its heap automatically based on the machine's memory. You need to change it with the -Xmx setting so it fits inside your container.
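With Spark on Mesos, the executor heap is normally controlled through Spark configuration rather than a raw -Xmx flag: spark.executor.memory becomes the executor JVM's -Xmx, and spark.mesos.executor.memoryOverhead reserves extra container memory for off-heap use. A minimal sketch follows; the application name and the 4g/1024 values are placeholders to adapt to your actual container limits.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class ExecutorMemoryConfig {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("Kafka2HDFSPM")                           // placeholder name
                    .set("spark.executor.memory", "4g")                   // executor JVM heap, passed as -Xmx
                    .set("spark.mesos.executor.memoryOverhead", "1024");  // extra MB reserved beyond the heap
            JavaSparkContext sc = new JavaSparkContext(conf);
            // ... build the ingestion job here ...
            sc.stop();
        }
    }

The same properties can also be passed on the command line via spark-submit --conf, which avoids hard-coding sizes in the application.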

See this article about running Java in a container:

https://github.com/fabianenardon/docker-java-issues-demo/tree/master/memory-sample