Map loses one of its elements

Time: 2016-02-12 22:43:27

Tags: java scala apache-spark distributed-computing rdd

I am implementing k-means and I want to create the new centroids. But the map leaves one element out! However, when K is a small value, such as 15, it works fine.

Based on the code, I have:

val K = 25 // number of clusters
val data = sc.textFile("dense.txt").map(
     t => (t.split("#")(0), parseVector(t.split("#")(1)))).cache()
val count = data.count()
println("Number of records " + count)

var centroids = data.takeSample(false, K, 42).map(x => x._2)
do {
  var closest = data.map(p => (closestPoint(p._2, centroids), p._2))
  var pointsGroup = closest.groupByKey()
  println(pointsGroup)
  pointsGroup.foreach { println }
  var newCentroids = pointsGroup.mapValues(ps => average(ps.toSeq)).collectAsMap()
  //var newCentroids = pointsGroup.mapValues(ps => average(ps)).collectAsMap() this will produce an error
  println(centroids.size)
  println(newCentroids.size)
  for (i <- 0 until K) {
    tempDist += centroids(i).squaredDist(newCentroids(i))
  }
  ..

And in the for loop I get an error saying it cannot find an element (it is not always the same one, it depends on K):

java.util.NoSuchElementException: key not found: 2

The output before the error appears:

Number of records 27776
ShuffledRDD[5] at groupByKey at kmeans.scala:72
25
24            <- IT SHOULD BE 25

What is the problem?

>>> println(newCentroids)
Map(23 -> (-0.0050852959701492536, 0.005512245104477607, -0.004460964477611937), 17 -> (-0.005459583045685268, 0.0029015278781725795, -8.451635532994901E-4), 8 -> (-4.691649213483123E-4, 0.0025375451685393366, 0.0063490755505617585), 11 -> (0.30361112034069937, -0.0017342255382385204, -0.005751167731061906), 20 -> (-5.839587918939964E-4, -0.0038189763756820145, -0.007067070459859708), 5 -> (-0.3787612396704685, -0.005814121628643806, -0.0014961713117870657), 14 -> (0.0024755681263616547, 0.0015191503267973836, 0.003411769193899781), 13 -> (-0.002657690932944597, 0.0077671050923225635, -0.0034652379980563263), 4 -> (-0.006963114731610361, 1.1751361829025871E-4, -0.7481135105367823), 22 -> (0.015318187079953534, -1.2929035958285013, -0.0044176372190034684), 7 -> (-0.002321059060773483, -0.006316359116022083, 0.006164669723756913), 16 -> (0.005341800955165691, -0.0017540737037037035, 0.004066574093567247), 1 -> (0.0024547379611650484, 0.0056298656504855955, 0.002504618082524296), 10 -> (3.421068671121009E-4, 0.0045169004751299275, 5.696239049740164E-4), 19 -> (-0.005453716071428539, -0.001450277556818192, 0.003860007248376626), 9 -> (-0.0032921685273631807, 1.8477108457711313E-4, -0.003070412228855717), 18 -> (-0.0026803160958904053, 0.00913904078767124, -0.0023528013698630146), 3 -> (0.005750011594202901, -0.003607098309178754, -0.003615918896940412), 21 -> (0.0024925166025641056, -0.0037607353461538507, -2.1588444871794858E-4), 12 -> (-7.920202960526356E-4, 0.5390774232894769, -4.928884539473694E-4), 15 -> (-0.0018608492323232324, -0.006973787272727284, -0.0027266663434343404), 24 -> (6.151173211963486E-4, 7.081812613784045E-4, 5.612962808842611E-4), 6 -> (0.005323933953732931, 0.0024014750473186123, -2.969338590956889E-4), 0 -> (-0.0015991676750160377, -0.003001317289659613, 0.5384176139563245))

Related question about the error: spark scala throws java.util.NoSuchElementException: key not found: 0 exception

EDIT:

After zero323 observed that two centroids were identical, I changed the code so that all centroids are unique. However, the behaviour remains the same. For this reason, I suspect that closestPoint() may be returning the same index for two centroids. Here is the function:

  // Returns the index of the centroid in `centers` that is closest to `p`.
  def closestPoint(p: Vector, centers: Array[Vector]): Int = {
    var index = 0
    var bestIndex = 0
    var closest = Double.PositiveInfinity
    for (i <- 0 until centers.length) {
      val tempDist = p.squaredDist(centers(i))
      if (tempDist < closest) {
        closest = tempDist
        bestIndex = i
      }
    }
    return bestIndex
  }

How can I get around this? I am running the code I describe in Spark cluster.
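
For context, here is a minimal sketch of one way to make the initial centroids unique: sample from the distinct vector strings instead of the raw records. It assumes the same "id#vector" file layout and the parseVector function used above, and it is only an illustration of the idea, not the exact change made in the question.

// Sketch: draw the K initial centroids from distinct vector strings,
// so two identical points can never both be sampled.
var centroids = sc.textFile("dense.txt")
  .map(t => t.split("#")(1))   // keep only the vector part of each line
  .distinct()                  // deduplicate on the raw text representation
  .takeSample(false, K, 42)    // sample K distinct lines without replacement
  .map(parseVector)            // parse each sampled line into a Vector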

1 answer:

Answer 0 (score: 2)

It can happen that in the "E-step" (the assignment of points to cluster indices, analogous to the E-step of the EM algorithm) one of your indices is not assigned any points. If this happens, you need a way to associate that index with some point, otherwise you will end up with fewer clusters after the "M-step" (the assignment of centroids to indices, analogous to the M-step of the EM algorithm). Something like this should work:

val newCentroids = {
  val temp = pointsGroup.mapValues(ps => average(ps.toSeq)).collectAsMap()
  // Indices that received no points are missing from temp; sample one fresh point for each.
  val nMissing = K - temp.size
  val sample = data.takeSample(false, nMissing, seed)  // seed: any Long, e.g. 42
  var c = -1
  (for (i <- 0 until K) yield {
    // Keep the computed centroid for index i, or fall back to the next sampled point.
    val point = temp.getOrElse(i, { c += 1; sample(c)._2 })  // ._2: the Vector part of the (id, vector) pair
    (i, point)
  }).toMap
}

Just substitute that code for the line you are currently using to compute newCentroids.

There are other ways of dealing with this, and the approach above is probably not the best (is it a good idea to call takeSample multiple times, once per iteration of the k-means algorithm? what if data contains many repeated values? etc.), but it is a simple starting point.
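
For example, here is a sketch of one sampling-free variant (not part of the original answer): an index that received no points simply keeps its previous centroid for this iteration. It reuses centroids, pointsGroup, average and K exactly as they appear in the question.

// Sketch: clusters that received no points keep their old centroid,
// so no extra takeSample call is needed per iteration.
val grouped = pointsGroup.mapValues(ps => average(ps.toSeq)).collectAsMap()
val newCentroids = (0 until K).map { i =>
  (i, grouped.getOrElse(i, centroids(i)))  // reuse the previous centroid when index i is empty
}.toMap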

By the way, you might want to think about whether you could replace groupByKey with reduceByKey.
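
A sketch of that idea, assuming Vector supports element-wise + and scaling by a Double (as the spark.util.Vector used with squaredDist above does): keep a (sum, count) pair per cluster with reduceByKey and divide at the end, so the full groups of points never have to be shuffled.

// Sketch: per-cluster (sum, count) pairs via reduceByKey instead of groupByKey.
val closest = data.map(p => (closestPoint(p._2, centroids), (p._2, 1)))
val stats = closest.reduceByKey { case ((v1, c1), (v2, c2)) => (v1 + v2, c1 + c2) }
val newCentroids = stats
  .mapValues { case (sum, count) => sum * (1.0 / count) }  // mean of each cluster
  .collectAsMap()
// The empty-index issue from the answer still applies: clusters with no points are simply absent here.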

Note: for the curious, here is a reference describing the similarities between the EM algorithm and the k-means algorithm: http://papers.nips.cc/paper/989-convergence-properties-of-the-k-means-algorithms.pdf