Building an inverted index in a Spark application using Scala

Time: 2015-12-01 05:58:11

Tags: scala csv apache-spark

I am new to Spark and the Scala programming language. My input is a CSV file, and I need to build an inverted index over the values in the CSV file, as in the example below.

Input: file.csv

attr1, attr2, attr3
1,     AAA,    23
2,     BBB,    23
3,     AAA,    27

output format: value -> (rowid, columnid) pairs
for example: AAA -> ((1,2),(3,2))
             27  -> (3,3)

I have started with the following code, but I am stuck after this point. Please help.

import org.apache.spark.{SparkConf, SparkContext}

object Main {
  def main(args: Array[String]) {

    val conf = new SparkConf().setAppName("Invert Me!").setMaster("local[2]")
    val sc = new SparkContext(conf)

    val txtFilePath = "/home/person/Desktop/sample.csv"

    val txtFile = sc.textFile(txtFilePath)
    val nRows = txtFile.count()

    // Split each line into trimmed cell values; data is an RDD[Array[String]].
    val data = txtFile.map(line => line.split(",").map(elem => elem.trim()))
    val nCols = data.collect()(0).length

  }
}

1 Answer:

Answer 0 (score: 5)

Code that keeps to your style could look like this:

// Broadcast the header row so every task can pair each cell with its column name.
val header = sc.broadcast(data.first())

// One record per data cell: value -> (columnName, rowIndex).
// filter(_._2 > 0) drops the header row, which zipWithIndex assigns index 0.
val cells = data.zipWithIndex().filter(_._2 > 0).flatMap { case (row, index) =>
  row.zip(header.value).map { case (value, column) => value -> (column, index) }
}
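For the sample file from the question, cells holds one record per data cell. A quick sanity check (a sketch; the printed entries assume the sample CSV above) could be:

// Inspect the intermediate cells RDD; collect() is only safe on small test data.
// Expected entries for the sample CSV, e.g.:
//   (1,(attr1,1)), (AAA,(attr2,1)), (23,(attr3,1)), (2,(attr1,2)), ...
cells.collect().foreach(println)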


import org.apache.spark.rdd.RDD // required for the RDD type annotation

// Group all (columnName, rowIndex) positions by cell value.
val index: RDD[(String, Vector[(String, Long)])] =
   cells.aggregateByKey(Vector.empty[(String, Long)])(_ :+ _, _ ++ _)

Here the index value should contain the desired mapping from CellValue to (ColumnName, RowIndex) pairs.
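To verify the result on the sample input, you could collect the (small) index and print it; a minimal sketch:

// Collect and print the inverted index; fine for small test data only.
index.collect().foreach { case (value, positions) =>
  println(s"$value -> ${positions.mkString(", ")}")
}
// Expected for the sample CSV, e.g.:
//   AAA -> (attr2,1), (attr2,3)
//   27 -> (attr3,3)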

The underscores in the code above are just shorthand lambdas; the same thing can be written more explicitly:

val cellsVerbose = data.zipWithIndex().flatMap {
  case (_, 0) => IndexedSeq.empty // skip the header row (zipWithIndex assigns it index 0)
  case (row, index) => row.zip(header.value).map {
    case (value, column) => value -> (column, index)
  }
}


// Same aggregation as before, with the lambdas spelled out.
val indexVerbose: RDD[(String, Vector[(String, Long)])] =
  cellsVerbose.aggregateByKey(zeroValue = Vector.empty[(String, Long)])(
    seqOp = (keys, key) => keys :+ key,
    combOp = (keysA, keysB) => keysA ++ keysB)
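If you need numeric (rowid, columnid) pairs exactly as in the question's example, rather than column names, a minimal variant (a sketch reusing data from the question, with 1-based column ids) would be:

// Sketch: emit value -> (rowId, columnId) with 1-based column ids,
// matching the output format requested in the question.
val numericCells = data.zipWithIndex().filter(_._2 > 0).flatMap {
  case (row, rowId) =>
    row.zipWithIndex.map { case (value, colIdx) => value -> (rowId, colIdx + 1) }
}

val numericIndex: RDD[(String, Vector[(Long, Int)])] =
  numericCells.aggregateByKey(Vector.empty[(Long, Int)])(_ :+ _, _ ++ _)
// e.g. AAA -> Vector((1,2), (3,2))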