Spark Jaccard similarity computation with MinHash is slower than the trivial approach

Date: 2016-04-01 13:43:46

Tags: scala apache-spark apache-spark-mllib

Given two huge lists of values, I am trying to compute the Jaccard similarity between them in Spark using Scala.

Assume colHashed1 contains the first list of values and colHashed2 contains the second.

Approach 1 (the trivial approach):

val jSimilarity = colHashed1.intersection(colHashed2).distinct.count/(colHashed1.union(colHashed2).distinct.count.toDouble)
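For a self-contained picture, here is a minimal sketch of Approach 1; the SparkContext setup and the toy data are my assumptions, not from the original post:

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("jaccard").setMaster("local[*]"))

// Toy stand-ins for the two huge value lists, already hashed to Ints.
val colHashed1 = sc.parallelize(Seq(1, 2, 3, 4))
val colHashed2 = sc.parallelize(Seq(3, 4, 5, 6))

// |A ∩ B| / |A ∪ B|; distinct guards against duplicate values.
val jSimilarity = colHashed1.intersection(colHashed2).distinct.count /
  colHashed1.union(colHashed2).distinct.count.toDouble    // 2 / 6 ≈ 0.33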

Approach 2 (using minHashing):

I used the approach explained here.

import java.util.zip.CRC32

def getCRC32 (s : String) : Int =
{
    val crc=new CRC32
    crc.update(s.getBytes)
    return crc.getValue.toInt & 0xffffffff
}

val maxShingleID = Math.pow(2,32)-1
def pickRandomCoeffs(kIn : Int) : Array[Int] =
{
  var k = kIn
  val randList = Array.fill(k){0}

  while(k > 0)
  {
    // Get a random shingle ID.

    var randIndex = (Math.random()*maxShingleID).toInt

    // Ensure that each random number is unique.
    while(randList.contains(randIndex))
    {
      randIndex = (Math.random()*maxShingleID).toInt
    }

    // Add the random number to the list.
    k = k - 1
    randList(k) = randIndex
   } 

   return randList
}

val colHashed1 = list1Values.map(a => getCRC32(a))
val colHashed2 = list2Values.map(a => getCRC32(a))

val nextPrime = 4294967311L
val numHashes = 10

val coeffA = pickRandomCoeffs(numHashes)
val coeffB = pickRandomCoeffs(numHashes)

var signature1 = Array.fill(numHashes){0}
for (i <- 0 to numHashes-1)
{
    // Evaluate the hash function.
    val hashCodeRDD = colHashed1.map(ele => ((coeffA(i) * ele + coeffB(i)) % nextPrime))

    // Track the lowest hash code seen.
    signature1(i) = hashCodeRDD.min.toInt
}

var signature2 = Array.fill(numHashes){0}
for (i <- 0 to numHashes-1)
{
    // Evaluate the hash function.
    val hashCodeRDD = colHashed2.map(ele => ((coeffA(i) * ele + coeffB(i)) % nextPrime))

    // Track the lowest hash code seen.
    signature2(i) = hashCodeRDD.min.toInt
}


var count = 0
// Count the number of positions in the minhash signature which are equal.
for(k <- 0 to numHashes-1)
{
  if(signature1(k) == signature2(k))
    count = count + 1
}  
val jSimilarity = count/numHashes.toDouble
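For reference (this note is not part of the original post): the last line is the standard MinHash estimator. For a random hash function h, the probability that two sets share the same minimum hash value equals their Jaccard similarity, so the fraction of matching signature positions estimates J:

\Pr\left[ \min_{a \in A} h(a) = \min_{b \in B} h(b) \right] = \frac{|A \cap B|}{|A \cup B|} = J(A, B)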

Approach 1 consistently seems to outperform Approach 2 in running time. When I profiled the code, the min() call on the RDD in Approach 2 takes a long time, and it is invoked once per hash function, so the cost scales with the number of hash functions used.

The intersection and union operations used in Approach 1 appear to be faster than the repeated min() calls.

I don't understand why minHashing does not help here. I expected minHashing to work faster than the trivial approach. Is there anything I am doing wrong?

Sample data can be viewed here.

3 answers:

Answer 0 (score: 0)

Jaccard similarity via MinHash does not give consistent results:

import java.util.zip.CRC32

object Jaccard {
  def getCRC32(s: String): Int = {
    val crc = new CRC32
    crc.update(s.getBytes)
    return crc.getValue.toInt & 0xffffffff
  }

  def pickRandomCoeffs(kIn: Int, maxShingleID: Double): Array[Int] = {
    var k = kIn
    val randList = Array.ofDim[Int](k)

    while (k > 0) {
      // Get a random shingle ID.
      var randIndex = (Math.random() * maxShingleID).toInt
      // Ensure that each random number is unique.
      while (randList.contains(randIndex)) {
        randIndex = (Math.random() * maxShingleID).toInt
      }
      // Add the random number to the list.
      k = k - 1
      randList(k) = randIndex
    }
    return randList
  }


  def approach2(list1Values: List[String], list2Values: List[String]) = {

    val maxShingleID = Math.pow(2, 32) - 1

    val colHashed1 = list1Values.map(a => getCRC32(a))
    val colHashed2 = list2Values.map(a => getCRC32(a))

    val nextPrime = 4294967311L
    val numHashes = 10

    val coeffA = pickRandomCoeffs(numHashes, maxShingleID)
    val coeffB = pickRandomCoeffs(numHashes, maxShingleID)

    val signature1 = for (i <- 0 until numHashes) yield {
      val hashCodeRDD = colHashed1.map(ele => (coeffA(i) * ele + coeffB(i)) % nextPrime)
      hashCodeRDD.min.toInt // Track the lowest hash code seen.
    }

    val signature2 = for (i <- 0 until numHashes) yield {
      val hashCodeRDD = colHashed2.map(ele => (coeffA(i) * ele + coeffB(i)) % nextPrime)
      hashCodeRDD.min.toInt // Track the lowest hash code seen
    }

    val count = (0 until numHashes)
      .map(k => if (signature1(k) == signature2(k)) 1 else 0)
      .fold(0)(_ + _)


    val jSimilarity = count / numHashes.toDouble
    jSimilarity
  }


  //  def approach1(list1Values: List[String], list2Values: List[String]) = {
  //    val colHashed1 = list1Values.toSet
  //    val colHashed2 = list2Values.toSet
  //
  //    val jSimilarity = colHashed1.intersection(colHashed2).distinct.count / (colHashed1.union(colHashed2).distinct.count.toDouble)
  //    jSimilarity
  //  }


  def approach1(list1Values: List[String], list2Values: List[String]) = {
    val colHashed1 = list1Values.toSet
    val colHashed2 = list2Values.toSet

    val jSimilarity = (colHashed1 & colHashed2).size / (colHashed1 ++ colHashed2).size.toDouble
    jSimilarity
  }

  def main(args: Array[String]) {

    val list1Values = List("a", "b", "c")
    val list2Values = List("a", "b", "d")

    for (i <- 0 until 5) {
      println(s"Iteration ${i}")
      println(s" - Approach 1: ${approach1(list1Values, list2Values)}")
      println(s" - Approach 2: ${approach2(list1Values, list2Values)}")
    }

  }
}

Output:

Iteration 0
 - Approach 1: 0.5
 - Approach 2: 0.5
Iteration 1
 - Approach 1: 0.5
 - Approach 2: 0.5
Iteration 2
 - Approach 1: 0.5
 - Approach 2: 0.8
Iteration 3
 - Approach 1: 0.5
 - Approach 2: 0.8
Iteration 4
 - Approach 1: 0.5
 - Approach 2: 0.4

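A note for context (not part of the original answer): with only k = 10 hash functions, the MinHash estimate is \hat{J} = X / k with X \sim \mathrm{Binomial}(k, J), so swings such as 0.4 and 0.8 around the true value 0.5 are statistically expected:

\operatorname{sd}[\hat{J}] = \sqrt{\frac{J(1 - J)}{k}} = \sqrt{\frac{0.5 \cdot 0.5}{10}} \approx 0.16
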
Why would you want to use it, then?

Answer 1 (score: 0)

It seems to me that the overhead cost of the minHashing approach simply outweighs its usefulness in Spark, especially as numHashes increases. Here are some observations I made about your code:

First, the while (randList.contains(randIndex)) check will definitely slow your process down as numHashes (which is, incidentally, the size of randList) increases, since contains is a linear scan over the array.

Second, you can save some time if you rewrite this code:

var signature1 = Array.fill(numHashes){0}
for (i <- 0 to numHashes-1)
{
    // Evaluate the hash function.
    val hashCodeRDD = colHashed1.map(ele => ((coeffA(i) * ele + coeffB(i)) % nextPrime))

    // Track the lowest hash code seen.
    signature1(i) = hashCodeRDD.min.toInt
}

var signature2 = Array.fill(numHashes){0}
for (i <- 0 to numHashes-1)
{
    // Evaluate the hash function.
    val hashCodeRDD = colHashed2.map(ele => ((coeffA(i) * ele + coeffB(i)) % nextPrime))

    // Track the lowest hash code seen.
    signature2(i) = hashCodeRDD.min.toInt
}


var count = 0
// Count the number of positions in the minhash signature which are equal.
for(k <- 0 to numHashes-1)
{
  if(signature1(k) == signature2(k))
    count = count + 1
}  

into

var count = 0
for (i <- 0 to numHashes - 1)
{
    val hashCodeRDD1 = colHashed1.map(ele => ((coeffA(i) * ele + coeffB(i)) % nextPrime))
    val hashCodeRDD2 = colHashed2.map(ele => ((coeffA(i) * ele + coeffB(i)) % nextPrime))

    val sig1 = hashCodeRDD1.min.toInt
    val sig2 = hashCodeRDD2.min.toInt

    if (sig1 == sig2) { count = count + 1 }
}

This reduces the three loops to one. However, I am not sure whether it will improve the computation time much, since each iteration still launches two Spark jobs (one min() per list). A sketch of a single-pass alternative follows.
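Going one step further (this sketch is mine, not part of the original answer), all numHashes minimums can be computed in a single pass over each RDD with aggregate, so Spark runs one job per list instead of numHashes jobs. It assumes colHashed1 is an RDD[Int] and that coeffA, coeffB, numHashes and nextPrime are as defined in the question; note the .toLong widening, which also avoids the Int overflow lurking in coeffA(i) * ele:

// Per-hash-function running minimums, initialised to the identity for min.
val zero = Array.fill(numHashes)(Long.MaxValue)

val signature1 = colHashed1.aggregate(zero)(
  // seqOp: fold one element into the running minimums of its partition.
  (mins, ele) => {
    var i = 0
    while (i < numHashes) {
      val h = (coeffA(i).toLong * ele + coeffB(i)) % nextPrime
      if (h < mins(i)) mins(i) = h
      i += 1
    }
    mins
  },
  // combOp: merge the minimum arrays coming from different partitions.
  (m1, m2) => {
    var i = 0
    while (i < numHashes) {
      if (m2(i) < m1(i)) m1(i) = m2(i)
      i += 1
    }
    m1
  }
)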

One other suggestion I have, assuming the first approach turns out to be faster, is to use the properties of sets to modify the first approach:

val colHashed1_dist = colHashed1.distinct
val colHashed2_dist = colHashed2.distinct
val intersect_cnt = colHashed1_dist.intersection(colHashed2_dist).distinct.count

val jSimilarity = intersect_cnt/(colHashed1_dist.count + colHashed2_dist.count - intersect_cnt).toDouble

That way, instead of computing the union, you can just reuse the value of the intersection, using |A ∪ B| = |A| + |B| − |A ∩ B|.

Answer 2 (score: 0)

Actually, in the LSH approach you compute the minHash only once for each document and then compare the two minHashes for every possible pair of documents. In the trivial approach, by contrast, you perform a full comparison of the documents for every possible pair. That is roughly N^2/2 comparisons, so for a large enough number of documents the extra cost of computing the minHashes is negligible. A sketch of the pairwise comparison is below.
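As a sketch of where this pays off (the names here are hypothetical, not from the answer), once the signatures exist, scoring a pair is a cheap local operation:

// Compare two precomputed MinHash signatures; O(numHashes) per pair,
// no Spark job involved.
def signatureSimilarity(sigA: Array[Int], sigB: Array[Int]): Double = {
  require(sigA.length == sigB.length, "signatures must use the same hash functions")
  sigA.zip(sigB).count { case (a, b) => a == b }.toDouble / sigA.length
}

// For N documents there are N * (N - 1) / 2 pairs; with N = 10000 that is
// ~5e7 cheap local comparisons, versus ~5e7 full RDD intersection/union
// jobs in the trivial approach.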

You should therefore compare the performance of the trivial approach:

val jSimilarity = colHashed1.intersection(colHashed2).distinct.count/(colHashed1.union(colHashed2).distinct.count.toDouble)

with the performance of only the Jaccard distance calculation (the last lines of your code):

var count = 0
// Count the number of positions in the minhash signature which are equal.
for(k <- 0 to numHashes-1)
{
  if(signature1(k) == signature2(k))
    count = count + 1
}  
val jSimilarity = count/numHashes.toDouble
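
A hedged sketch of that measurement (mine, not from the answer), assuming colHashed1, colHashed2, signature1, signature2 and numHashes exist as defined in the question:

// Crude wall-clock timer for a single expression.
def time[A](label: String)(body: => A): A = {
  val t0 = System.nanoTime()
  val result = body
  println(f"$label: ${(System.nanoTime() - t0) / 1e6}%.1f ms")
  result
}

val simTrivial = time("trivial (RDD intersection/union)") {
  colHashed1.intersection(colHashed2).distinct.count /
    colHashed1.union(colHashed2).distinct.count.toDouble
}

val simSignature = time("signature comparison only") {
  signature1.zip(signature2).count { case (a, b) => a == b } / numHashes.toDouble
}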