Getting OutOfMemory when running Spark MLlib KMeans

Date: 2016-07-22 23:12:26

Tags: apache-spark machine-learning apache-spark-mllib apache-spark-ml

When I run Spark KMeans on a large dataset, I always get an OutOfMemory error. The training set is about 250GB, and I have a 10-node Spark cluster where each machine has 16 CPUs and 150GB of memory. I allocate 100GB of memory on each node and 50 CPUs to the job. I set the number of cluster centers to 100 and the number of iterations to 5. But when execution reaches the following line, I get an OutOfMemory error:

val model = KMeans.train(parsedData, numClusters, numIterations)

Are there any parameters I can tune to fix this problem?

If I set a smaller number of cluster centers or a smaller number of iterations, there is no problem.

My code is as follows:

import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

val numClusters = 100  // cluster centers, as described above
val numIterations = 5

// split each line on ':' into (comma-separated feature string, label)
val originalData = sc.textFile("hdfs://host/input.txt").cache()
val tupleData = originalData.map { x => (x.split(":")(0), x.split(":")(1)) }
val parsedData = tupleData.map(_._1).map(s => Vectors.dense(s.split(',').map(_.toDouble)))

val model = KMeans.train(parsedData, numClusters, numIterations, 1, initializationMode = KMeans.RANDOM)
val resultRdd = tupleData.map { p => (model.predict(Vectors.dense(p._1.split(',').map(_.toDouble))), p._2) }
resultRdd.sortByKey(true, 1).saveAsTextFile("hdfs://host/output.txt")

My input format is as follows:

0.0,0.0,91.8,21.67,0.0 ... (each row has 100K elements)
1.1,1.08,19.8,0.0,0.0 ... 
0.0,0.08,19.8,0.0,0.0 ...
...
The number of rows is 600K.

The exception I get is as follows:

scheduler.DAGScheduler: Submitting ShuffleMapStage 42 (MapPartitionsRDD[49] at map at KmeansTest.scala:47), which has no missing parents
Exception in thread "dag-scheduler-event-loop" java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:2271)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
    at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1876)
    at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1785)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1188)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)

1 Answer:

Answer 0 (score: 4)

By default, Spark's KMeans implementation uses the K_MEANS_PARALLEL (k-means||) initialization mode. Part of this mode runs on the driver machine, and depending on your data it can be very slow and/or cause an OOM on the driver.

Try switching to the RANDOM initialization mode:

val model = KMeans.train(parsedData, numClusters, numIterations, 1, initializationMode = KMeans.RANDOM)
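Equivalently, you can use MLlib's builder-style KMeans API to set the initialization mode explicitly; a minimal sketch, assuming the same parsedData, numClusters, and numIterations as in your code:

import org.apache.spark.mllib.clustering.KMeans

// builder-style equivalent of KMeans.train(..., initializationMode = KMeans.RANDOM);
// RANDOM samples the initial centers from the data and does far less work on the
// driver than k-means||
val model = new KMeans()
  .setK(numClusters)
  .setMaxIterations(numIterations)
  .setInitializationMode(KMeans.RANDOM)
  .run(parsedData)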

Another thing to try is increasing the driver memory when submitting your application. For example, use the following to set the driver memory to 4G:

spark-submit --conf "spark.driver.memory=4g" ...
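The same setting is also available as the dedicated --driver-memory flag. A sketch of a fuller submit command; the class name, jar path, and executor settings below are placeholders (not from the question), and --total-executor-cores applies to standalone/Mesos clusters:

spark-submit \
  --class KmeansTest \
  --driver-memory 4g \
  --executor-memory 100g \
  --total-executor-cores 50 \
  /path/to/kmeans-test.jar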