Word count per document in Spark

Date: 2015-02-02 03:13:48

Tags: scala apache-spark

I'm learning Spark (in Scala) and have been trying to figure out how to count all the words on each line of a file. I'm working with a dataset where each line contains a tab-separated document_id and the full text of the document:

doc_1   <full-text>
doc_2   <full-text>
etc..

Here is a toy example in a file called doc.txt:

doc_1   new york city new york state
doc_2   rain rain go away

I think what I need to do is transform this into tuples containing

((doc_id, word), 1)

and then call reduceByKey() to sum up the 1s. I wrote the following:

val file = sc.textFile("docs.txt")
val tuples = file.map(_.split("\t"))
            .map( x => (x(1).split("\\s+")
            .map(y => ((x(0), y), 1 ))   ) )

This gives me what I think is the intermediate representation I need:

tuples.collect

res0: Array[Array[((String, String), Int)]] = Array(Array(((doc_1,new),1), ((doc_1,york),1), ((doc_1,city),1), ((doc_1,new),1), ((doc_1,york),1), ((doc_1,state),1)), Array(((doc_2,rain),1), ((doc_2,rain),1), ((doc_2,go),1), ((doc_2,away),1)))

But calling reduceByKey on tuples produces an error:

tuples.reduceByKey(_ + _)
<console>:21: error: value reduceByKey is not a member of org.apache.spark.rdd.RDD[Array[((String, String), Int)]]
              tuples.reduceByKey(_ + _)

I can't seem to wrap my head around how to do this. I think I need a reduce over the arrays inside the array. I've tried many different things, but I keep getting errors like the one above and haven't made any progress. Any guidance/advice on this would be greatly appreciated.

Note: I know there is a word-count example at https://spark.apache.org/examples.html that shows how to get the counts of all the words in a file. But that is for the entire input file. I'm talking about getting a count per document, where each document is on a different line.
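
For context, the word count on that examples page boils down to roughly the sketch below (reusing the toy docs.txt path purely for illustration); the difference in this question is only that the document id has to stay in the key:

//Whole-file word count: all documents share one key space, so per-document counts are lost
val counts = sc.textFile("docs.txt")
  .flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)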

3 Answers:

Answer 0 (score: 3)

reduceByKey expects an RDD[(K,V)], but the moment you perform the split inside that first map, you end up with an RDD[Array[...]], which is not the type signature it needs. You can rework your current solution as follows... but it probably won't read as nicely as the flatMap version shown after it:

//Dummy data load
val file = sc.parallelize(List("doc_1\tnew york city","doc_2\train rain go away"))  

//Split the data on tabs to get an array of (key, line) tuples
val firstPass = file.map(_.split("\t"))

//Split the line inside each tuple so you now have an array of (key, Array(...)) 
//Where the inner array is full of (word, 1) tuples
val secondPass = firstPass.map(x=>(x(0), x(1).split("\\s+").map(y=>(y,1)))) 

//Now group the words and re-map so that the inner tuple is the wordcount
val finalPass = secondPass.map(x=>(x._1, x._2.groupBy(_._1).map(y=>(y._1,y._2.size))))
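
For reference, collecting finalPass on the dummy data above should produce something like the following (inner Map ordering may vary; shown only to illustrate the shape of the result):

finalPass.collect
//Array((doc_1,Map(new -> 1, york -> 1, city -> 1)),
//      (doc_2,Map(rain -> 2, go -> 1, away -> 1)))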

The following would probably be a better solution:

If you want to keep your current structure, then you need to change to using a Tuple2 from the start and then use a flatMap afterwards:

//Load your data
val file = sc.parallelize(List("doc_1\tnew york city","doc_2\train rain go away"))

//Turn the data into a key-value RDD (I suggest caching the split, kept 1 line for SO)
val firstPass = file.map(x=>(x.split("\t")(0), x.split("\t")(1)))

//Change your key to be a Tuple2[String,String] and the value is the count
val tuples = firstPass.flatMap(x=>x._2.split("\\s+").map(y=>((x._1, y), 1)))
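
From there, the reduceByKey the question was attempting should work, since the RDD now has the (key, value) shape it expects. A minimal sketch of that final step (the wordCounts name is just illustrative, not part of the original answer):

//Sum the 1s per (doc_id, word) key
val wordCounts = tuples.reduceByKey(_ + _)
//On the dummy data this yields tuples like ((doc_2,rain),2), ((doc_1,new),1), ...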


Answer 1 (score: 0)

Here is a quick demonstration on a very small dataset.

scala> val file = sc.textFile("../README.md")
15/02/02 00:32:38 INFO MemoryStore: ensureFreeSpace(32792) called with curMem=45512, maxMem=278302556
15/02/02 00:32:38 INFO MemoryStore: Block broadcast_6 stored as values in memory (estimated size 32.0 KB, free 265.3 MB)
file: org.apache.spark.rdd.RDD[String] = ../README.md MappedRDD[7] at textFile at <console>:12

scala> val splitLines = file.map{ line => line.split(" ") } 
splitLines: org.apache.spark.rdd.RDD[Array[String]] = MappedRDD[9] at map at <console>:14

scala> splitLines.map{ arr => arr.toList.groupBy(identity).map{ x => (x._1, x._2.size) } }
res19: org.apache.spark.rdd.RDD[scala.collection.immutable.Map[String,Int]] = MappedRDD[10] at map at <console>:17

scala> val result = splitLines.map{ arr => arr.toList.groupBy(identity).map{ x => (x._1, x._2.size) } }
result: org.apache.spark.rdd.RDD[scala.collection.immutable.Map[String,Int]] = MappedRDD[11] at map at <console>:16

scala> result.take(10).foreach(println)

Map(# -> 1, Spark -> 1, Apache -> 1)
Map( -> 1)
Map(for -> 1, is -> 1, Data. -> 1, system -> 1, a -> 1, provides -> 1, computing -> 1, cluster -> 1, general -> 1, Spark -> 1, It -> 1, fast -> 1, Big -> 1, and -> 1)
Map(in -> 1, Scala, -> 1, optimized -> 1, APIs -> 1, that -> 1, Java, -> 1, high-level -> 1, an -> 1, Python, -> 1, and -> 2, engine -> 1)
Map(for -> 1, data -> 1, a -> 1, also -> 1, general -> 1, supports -> 2, It -> 1, graphs -> 1, analysis. -> 1, computation -> 1)
Map(for -> 1, set -> 1, tools -> 1, rich -> 1, Spark -> 1, structured -> 1, including -> 1, of -> 1, and -> 1, higher-level -> 1, SQL -> 2)
Map(GraphX -> 1, for -> 2, processing, -> 2, data -> 1, MLlib -> 1, learning, -> 1, machine -> 1, graph -> 1)
Map(for -> 1, Streaming -> 1, processing. -> 1, stream -> 1, Spark -> 1, and -> 1)
Map( -> 1)
Map(<http://spark.apache.org/> -> 1)
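
Adapting this demo to the question's tab-separated format would just mean splitting off the document id first; a minimal sketch of that adaptation (not part of the original answer, variable names are illustrative):

//Keep the doc_id as the key and count the words of each document with a local groupBy
val perDoc = sc.textFile("docs.txt")
  .map(_.split("\t"))
  .map(arr => (arr(0), arr(1).split("\\s+").toList.groupBy(identity).map(x => (x._1, x._2.size))))
//e.g. (doc_1,Map(new -> 2, york -> 2, city -> 1, state -> 1)), (doc_2,Map(rain -> 2, go -> 1, away -> 1))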

Answer 2 (score: 0)

map