PySpark: Exception in thread "dag-scheduler-event-loop" java.lang.OutOfMemoryError: Java heap space

Date: 2018-03-05 15:10:19

Tags: python pyspark k-means

I am trying to convert categorical values into numeric ones with StringIndexer, OneHotEncoder, and VectorAssembler so that I can apply K-means clustering in PySpark. Here is my code:

from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from pyspark.ml.clustering import KMeans

indexers = [
    StringIndexer(inputCol=c, outputCol="{0}_indexed".format(c))
    for c in columnList
]

encoders = [OneHotEncoder(dropLast=False, inputCol=indexer.getOutputCol(),
                          outputCol="{0}_encoded".format(indexer.getOutputCol()))
            for indexer in indexers
            ]

assembler = VectorAssembler(inputCols=[encoder.getOutputCol() for encoder in encoders], outputCol="features")


pipeline = Pipeline(stages=indexers + encoders + [assembler])
model = pipeline.fit(df)
transformed = model.transform(df)

kmeans = KMeans().setK(2).setFeaturesCol("features").setPredictionCol("prediction")
kMeansPredictionModel = kmeans.fit(transformed)

predictionResult = kMeansPredictionModel.transform(transformed)
predictionResult.show(5)
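For intuition about what the indexer and encoder stages produce before k-means sees the features, here is a minimal pure-Python sketch (my own illustration, not Spark code: it mimics StringIndexer's default frequency-descending indexing, with ties broken alphabetically as an arbitrary choice of this sketch, and OneHotEncoder with dropLast=False):

```python
from collections import Counter

def string_index(values):
    """Mimic StringIndexer: map labels to indices by descending frequency
    (ties broken alphabetically in this sketch)."""
    counts = Counter(values)
    ordered = sorted(counts, key=lambda v: (-counts[v], v))
    mapping = {v: i for i, v in enumerate(ordered)}
    return [mapping[v] for v in values], mapping

def one_hot(indices, size):
    """Mimic OneHotEncoder(dropLast=False): full-length 0/1 indicator vectors."""
    return [[1 if i == idx else 0 for i in range(size)] for idx in indices]

colors = ["red", "blue", "red", "green"]
indices, mapping = string_index(colors)   # "red" is most frequent -> index 0
vectors = one_hot(indices, len(mapping))  # e.g. "red" -> [1, 0, 0]
```

With dropLast=False every category gets its own indicator position, which is why wide categorical columns can blow up the feature vector size (and memory use) quickly.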

I get Exception in thread "dag-scheduler-event-loop" java.lang.OutOfMemoryError: Java heap space. How can I allocate more heap space, ideally from within the code? Is allocating more space even a sensible fix? Can I limit the program to the number of threads and the amount of heap space that are actually available?
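One way to give the JVM more heap, and to cap the thread count at the same time, is to pass resource flags when launching the job (a sketch; the 4g sizes and local[4] are placeholder values to tune for your machine, and my_kmeans_job.py is a hypothetical script name):

```shell
# Launch with a larger driver/executor heap and at most 4 local worker threads
spark-submit \
  --master local[4] \
  --driver-memory 4g \
  --executor-memory 4g \
  my_kmeans_job.py
```

Driver memory must be fixed before the JVM starts, so it has to be set at launch time like this (or via spark-defaults.conf) rather than on an already-running SparkSession. In local mode the computation runs inside the driver JVM, so --driver-memory is usually the setting that matters for this error.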

1 Answer:

Answer 0 (score: 0)

I ran into the same problem. Increasing the number of processes the user is allowed to run helped. Run, for example:

ulimit -u 4096
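For reference, ulimit -u raises the per-user process/thread limit only for the current shell session; on PAM-based Linux systems it can be made persistent in /etc/security/limits.conf (a sketch; the values are arbitrary):

```shell
# Temporary: raise the soft limit for this shell only
ulimit -u 4096

# Persistent: add lines like these to /etc/security/limits.conf, then log in again
#   <username>  soft  nproc  4096
#   <username>  hard  nproc  8192
```

Note that an exhausted process limit typically surfaces as "unable to create new native thread"; a genuine "Java heap space" error usually also needs a larger driver heap (e.g. spark.driver.memory).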