Spark: counting co-occurrences - an algorithm for efficient multi-pass filtering of huge collections

Time: 2015-05-29 16:00:46

Tags: algorithm scala group-by apache-spark filtering

I have a table with two columns, books and readers, where books and readers are book and reader IDs respectively:

   books readers
1:     1      30
2:     2      10
3:     3      20
4:     1      20
5:     1      10
6:     2      30

The record book = 1, reader = 30 means that the book with id = 1 was read by the reader with id = 30. For each pair of books I need to count the number of readers who read both of them, using this algorithm:

for each book
  for each reader of the book
    for each other_book in books of the reader
      increment common_reader_count ((book, other_book), cnt)
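
To make the pseudocode concrete, here is a minimal local (non-Spark) Scala sketch of the same loops over the sample records above; it uses plain collections only and is not the Spark implementation:

// Local illustration of the counting loops, using the sample (book, reader) records.
val recs = Seq((1, 30), (2, 10), (3, 20), (1, 20), (1, 10), (2, 30))

val readersByBook = recs.groupBy(_._1).mapValues(_.map(_._2)) // book   -> readers
val booksByReader = recs.groupBy(_._2).mapValues(_.map(_._1)) // reader -> books

val counts = scala.collection.mutable.Map.empty[(Int, Int), Int]
for {
  (book, readers) <- readersByBook
  reader          <- readers
  otherBook       <- booksByReader(reader)
} counts((book, otherBook)) = counts.getOrElse((book, otherBook), 0) + 1
// counts now holds common_reader_count for every (book, other_book) pair,
// including the (book, book) diagonal, exactly as in the pseudocode.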

The advantage of this algorithm is that it requires far fewer operations than counting over all 2-element combinations of books.

To implement the above algorithm I organize the data into two groups: 1) an RDD keyed by book, containing the readers of each book, and 2) an RDD keyed by reader, containing the books read by each reader, as in the following program:

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.log4j.Logger
import org.apache.log4j.Level

object Small {

  case class Book(book: Int, reader: Int)
  case class BookPair(book1: Int, book2: Int, cnt:Int)

  val recs = Array(
    Book(book = 1, reader = 30),
    Book(book = 2, reader = 10),
    Book(book = 3, reader = 20),
    Book(book = 1, reader = 20),
    Book(book = 1, reader = 10),
    Book(book = 2, reader = 30))

  def main(args: Array[String]) {
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
    // set up environment
    val conf = new SparkConf()
      .setAppName("Test")
      .set("spark.executor.memory", "2g")
    val sc = new SparkContext(conf)
    val data = sc.parallelize(recs)

    val bookMap = data.map(r => (r.book, r))
    val bookGrps = bookMap.groupByKey

    val readerMap = data.map(r => (r.reader, r))
    val readerGrps = readerMap.groupByKey

    // *** Calculate book pairs
    // Iterate book groups 
    val allBookPairs = bookGrps.map(bookGrp => bookGrp match {
      case (book, recIter) =>
        // Iterate user groups 
        recIter.toList.map(rec => {
          // Find readers for this book
          val aReader = rec.reader
          // Find all books (including this one) that this reader read
          val allReaderBooks = readerGrps.filter(readerGrp => readerGrp match {
            case (reader2, recIter2) => reader2 == aReader
          })
          val bookPairs = allReaderBooks.map(readerTuple => readerTuple match {
            case (reader3, recIter3) => recIter3.toList.map(rec => ((book, rec.book), 1))
          })
          bookPairs
        })

    })
    val x = allBookPairs.flatMap(identity)
    val y = x.map(rdd => rdd.first)
    val z = y.flatMap(identity)
    val p = z.reduceByKey((cnt1, cnt2) => cnt1 + cnt2)
    val result = p.map(bookPair => bookPair match {
      case((book1, book2),cnt) => BookPair(book1, book2, cnt)
    } )

    val resultCsv = result.map(pair => resultToStr(pair))
    resultCsv.saveAsTextFile("./result.csv")
  }

  def resultToStr(pair: BookPair): String = {
    val sep = "|"
    pair.book1 + sep + pair.book2 + sep + pair.cnt
  }
}

This implementation actually results in a different, inefficient algorithm!

for each book
  find each reader of the book scanning all readers every time!
    for each other_book in books of the reader
      increment common_reader_count ((book, other_book), cnt)

This contradicts the main goal of the algorithm above, because instead of decreasing the number of operations it increases it: finding a reader's books requires filtering all readers for every book. So the number of operations is ~ N * M, where N is the number of readers and M is the number of books.
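
What I am aiming for, roughly, is something like the following sketch: group the records by reader once and emit each reader's book pairs directly, so that no per-book filter over all readers is needed. This is only a rough sketch (assuming the data RDD and imports from the program above), and I am not sure it is the idiomatic way to do it in Spark:

// Sketch of an alternative: group by reader once, then emit (book, other_book)
// pairs per reader and sum them, so no per-book filter over all readers is needed.
val booksPerReader = data.map(r => (r.reader, r.book)).groupByKey()
val pairCounts = booksPerReader
  .flatMap { case (_, books) =>
    val bs = books.toList
    for (b1 <- bs; b2 <- bs) yield ((b1, b2), 1) // includes (b, b), like the pseudocode
  }
  .reduceByKey(_ + _)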

Questions:

  1. Is there a way to implement the original algorithm in Spark without filtering the complete reader collection for every book?
  2. Are there any other algorithms that compute book pair counts efficiently?
  3. Also, when actually running this code I get a filter exception whose cause I cannot figure out. Any ideas?
  4. See the exception log below:

    15/05/29 18:24:05 WARN util.Utils: Your hostname, localhost.localdomain resolves to a loopback address: 127.0.0.1; using 10.0.2.15 instead (on interface eth0)
    15/05/29 18:24:05 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
    15/05/29 18:24:09 INFO slf4j.Slf4jLogger: Slf4jLogger started
    15/05/29 18:24:10 INFO Remoting: Starting remoting
    15/05/29 18:24:10 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.0.2.15:38910]
    15/05/29 18:24:10 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@10.0.2.15:38910]
    15/05/29 18:24:12 ERROR executor.Executor: Exception in task 0.0 in stage 6.0 (TID 4)
    java.lang.NullPointerException
        at org.apache.spark.rdd.RDD.filter(RDD.scala:282)
        at Small$$anonfun$4$$anonfun$apply$1.apply(Small.scala:58)
        at Small$$anonfun$4$$anonfun$apply$1.apply(Small.scala:54)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
        at Small$$anonfun$4.apply(Small.scala:54)
        at Small$$anonfun$4.apply(Small.scala:51)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap.insertAll(ExternalAppendOnlyMap.scala:137)
        at org.apache.spark.Aggregator.combineValuesByKey(Aggregator.scala:58)
        at org.apache.spark.shuffle.hash.HashShuffleWriter.write(HashShuffleWriter.scala:55)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:54)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
    

    Update

    This code:

    val df = sc.parallelize(Array((1,30),(2,10),(3,20),(1,10),(2,30))).toDF("books","readers")
    val results = df.join(
        df.select($"books" as "r_books", $"readers" as "r_readers"),
        $"readers" === $"r_readers" and $"books" < $"r_books"
      )
      .groupBy($"books", $"r_books")
      .agg($"books", $"r_books", count($"readers"))
    

    gives the following result:

    books r_books COUNT(readers)
    1     2       2     
    

    So COUNT here is the number of readers who read a given pair of books (here books 1 and 2) together.

1 Answer:

Answer 0 (score: 8)

This sort of thing is a lot easier if you convert the original RDD to a DataFrame:

val df = sc.parallelize(
  Array((1,30),(2,10),(3,20),(1,10), (2,30))
).toDF("books","readers")

Once you do that, just do a self-join on the DataFrame to make book pairs, then count how many readers have read each book pair:

val results = df.join(
  df.select($"books" as "r_books", $"readers" as "r_readers"), 
  $"readers" === $"r_readers" and $"books" < $"r_books"
).groupBy(
  $"books", $"r_books"
).agg(
  $"books", $"r_books", count($"readers")
)
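
You can then simply display the aggregation; with the five sample rows from the update above, it should contain a single pair, books 1 and 2 read together by 2 readers:

// Show the aggregated pairs (expected for the sample data: books=1, r_books=2, count 2).
results.show()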

A further note about that join: note that I am joining df back to itself, a self-join: df.join(df.select(...), ...). What you are looking to do is stitch book #1 ($"books") together with a second book ($"r_books") read by the same reader ($"readers" === $"r_readers"). But if you joined only on $"readers" === $"r_readers", you would also join every book back to itself. Instead, I use $"books" < $"r_books" to ensure that the ordering within each book pair is always (<lower_id>,<higher_id>).

Once you do the join, you get a DataFrame with one row for every reader of every book pair. The groupBy and agg functions do the actual counting of readers per book pairing.

Incidentally, if a reader read the same book twice, I believe you would end up double counting here, which may or may not be what you want. If that is not what you want, just change count($"readers") to countDistinct($"readers").
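
For example, the deduplicated variant could look like this (just a sketch, assuming the same df as above; countDistinct comes from org.apache.spark.sql.functions):

import org.apache.spark.sql.functions.countDistinct

// Same self-join as before, but each reader is counted at most once per book pair.
val distinctResults = df.join(
  df.select($"books" as "r_books", $"readers" as "r_readers"),
  $"readers" === $"r_readers" and $"books" < $"r_books"
).groupBy(
  $"books", $"r_books"
).agg(
  $"books", $"r_books", countDistinct($"readers")
)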

If you want to know more about the agg functions count() and countDistinct(), plus a bunch of other fun stuff, check out the scaladoc for org.apache.spark.sql.functions.