Is there an alternative to lists in Scala?

Asked: 2017-07-27 05:06:39

Tags: scala apache-spark

I have Scala code like this:

def avgCalc(buffer: Iterable[Array[String]], list: Array[String]) = {
  val currentTimeStamp = list(1).toLong // loads the timestamp column
  var sum = 0.0
  var count = 0
  var check = false
  import scala.util.control.Breaks._
  breakable {
    for (array <- buffer) {
      val toCheckTimeStamp = array(1).toLong // timestamp column
      if (((currentTimeStamp - 10L) <= toCheckTimeStamp) && (currentTimeStamp >= toCheckTimeStamp)) { // to check the timestamp for 10 seconds difference
        sum += array(5).toDouble // RSSI weightage values
        count += 1
      }

      if ((currentTimeStamp - 10L) > toCheckTimeStamp) {
        check = true
        break
      }
    }
  }
  list :+ sum
}
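Since the snippet above is not runnable on its own, here is a minimal pure-Scala sketch (no Spark, with hypothetical sample rows) of what `avgCalc` computes: for the given row's timestamp `t`, it sums column 5 over rows whose timestamps lie in `[t - 10, t]`, stopping early because the buffer is sorted in descending order. Note that appending the `Double` sum to an `Array[String]` widens the element type; the sketch appends `sum.toString` instead to stay in `Array[String]`.

```scala
// Minimal pure-Scala sketch of the windowed-sum logic in avgCalc above.
// Hypothetical rows: (tag, timestamp, listener, x, y, rssi), sorted by
// timestamp in descending order, as the Spark pipeline produces them.
object AvgCalcSketch {
  def avgCalc(buffer: Iterable[Array[String]], list: Array[String]): Array[String] = {
    val currentTimeStamp = list(1).toLong
    var sum = 0.0
    import scala.util.control.Breaks._
    breakable {
      for (array <- buffer) {
        val toCheckTimeStamp = array(1).toLong
        if ((currentTimeStamp - 10L) <= toCheckTimeStamp && currentTimeStamp >= toCheckTimeStamp)
          sum += array(5).toDouble // inside the 10-second window
        if ((currentTimeStamp - 10L) > toCheckTimeStamp)
          break // buffer is sorted descending, so all remaining rows are older
      }
    }
    list :+ sum.toString // append as String to keep Array[String]
  }

  def main(args: Array[String]): Unit = {
    val rows = Seq(
      Array("tagA", "20", "L1", "0", "0", "3.0"),
      Array("tagA", "15", "L1", "0", "0", "2.0"),
      Array("tagA", "5",  "L1", "0", "0", "1.0"))
    // Timestamps 20 and 15 fall in [10, 20]; 5 does not, so sum = 5.0
    println(avgCalc(rows, rows.head).mkString(",")) // tagA,20,L1,0,0,3.0,5.0
  }
}
```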

I call the above function like this:

 import spark.implicits._
  val averageDF =
    filterop.rdd.map(_.mkString(",")).map(line => line.split(",").map(_.trim))
      .sortBy(array => array(1), false) // Sort by timestamp
      .groupBy(array => (array(0), array(2))) // group by tag and listner
      .mapValues(buffer => {
        buffer.map(list => {
         avgCalc(buffer, list) // calling the average function 
        })
      })
      .flatMap(x => x._2)
      .map(x => findingavg(x(0).toString, x(1).toString.toLong, x(2).toString, x(3).toString, x(4).toString, x(5).toString.toDouble, x(6).toString.toDouble)) // defining the schema through case class
      .toDF // converting to data frame

The above code works fine, but I need to get rid of the list. My senior asked me to remove the list because it slows down execution. Any suggestions on how to proceed without a list? Any help would be much appreciated.

2 Answers:

Answer 0 (score: 4)

The following solution should work, I think. I have tried to avoid passing both the iterable and an array.

def avgCalc(buffer: Iterable[Array[String]]) = {
  var finalArray = Array.empty[Array[String]]
  import scala.util.control.Breaks._
  for (outerArray <- buffer) {
    val currentTimeStamp = outerArray(1).toLong
    var sum = 0.0
    var count = 0
    var check = false
    var list = outerArray
    breakable { // scoped to the inner scan so break only ends the scan for this row
      for (array <- buffer) {
        val toCheckTimeStamp = array(1).toLong
        if (((currentTimeStamp - 10L) <= toCheckTimeStamp) && (currentTimeStamp >= toCheckTimeStamp)) {
          sum += array(5).toDouble
          count += 1
        }
        if ((currentTimeStamp - 10L) > toCheckTimeStamp) {
          check = true
          break
        }
      }
    }
    if (sum != 0.0 && check) list = list :+ (sum / count).toString
    else list = list :+ list(5).toDouble.toString

    finalArray ++= Array(list)
  }
  finalArray
}
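As a quick sanity check outside Spark, the same logic can be exercised on a few hypothetical rows, sorted descending by timestamp. This sketch scopes `breakable` to the inner scan, so `break` only ends the scan for the current outer row rather than aborting the whole traversal:

```scala
object AvgCalcCheck {
  // Windowed-average logic on (tag, timestamp, listener, x, y, rssi) rows,
  // with `breakable` wrapping only the inner scan over the buffer.
  def avgCalc(buffer: Iterable[Array[String]]): Array[Array[String]] = {
    import scala.util.control.Breaks._
    var finalArray = Array.empty[Array[String]]
    for (outerArray <- buffer) {
      val currentTimeStamp = outerArray(1).toLong
      var sum = 0.0
      var count = 0
      var check = false
      var list = outerArray
      breakable {
        for (array <- buffer) {
          val toCheckTimeStamp = array(1).toLong
          if ((currentTimeStamp - 10L) <= toCheckTimeStamp && currentTimeStamp >= toCheckTimeStamp) {
            sum += array(5).toDouble
            count += 1
          }
          if ((currentTimeStamp - 10L) > toCheckTimeStamp) {
            check = true
            break // only ends the scan for the current outer row
          }
        }
      }
      if (sum != 0.0 && check) list = list :+ (sum / count).toString
      else list = list :+ list(5).toDouble.toString
      finalArray ++= Array(list)
    }
    finalArray
  }

  def main(args: Array[String]): Unit = {
    val rows = Seq(
      Array("tagA", "20", "L1", "0", "0", "3.0"),
      Array("tagA", "15", "L1", "0", "0", "2.0"),
      Array("tagA", "5",  "L1", "0", "0", "1.0"))
    // First row averages 3.0 and 2.0 to 2.5; the other two rows never hit
    // the break, so check stays false and the raw RSSI value is appended.
    avgCalc(rows).foreach(r => println(r.mkString(",")))
  }
}
```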

You can call it like this:

import sqlContext.implicits._
val averageDF =
  filter_op.rdd.map(_.mkString(",")).map(line => line.split(",").map(_.trim))
    .sortBy(array => array(1), false)
    .groupBy(array => (array(0), array(2)))
    .mapValues(buffer => {
        avgCalc(buffer)
    })
    .flatMap(x => x._2)
    .map(x => findingavg(x(0).toString, x(1).toString.toLong, x(2).toString, x(3).toString, x(4).toString, x(5).toString.toDouble, x(6).toString.toDouble))
    .toDF

I hope this is the answer you were looking for.

Answer 1 (score: 1)

I can see that you have already accepted an answer, but I have to say that you have a lot of unnecessary code. As far as I can tell, there is no reason for the initial conversion to Array in the first place, and the RDD-level sortBy is also unnecessary at that point. I would suggest you work directly on the Row type.

In addition, you have a number of unused variables that should be removed, and the conversion to a case class followed by toDF seems excessive, IMHO.

I would do something like this:

import org.apache.spark.sql.Row

def avgCalc(sortedList: List[Row]) = {
  sortedList.indices.map(i => {
    var sum = 0.0
    val row = sortedList(i)
    val currentTimeStamp = row.getString(1).toLong // loads the timestamp column
    import scala.util.control.Breaks._
    breakable {
      for (j <- 0 until sortedList.length) {
        if (j != i) {
          val anotherRow = sortedList(j)
          val toCheckTimeStamp = anotherRow.getString(1).toLong // timestamp column
          if (((currentTimeStamp - 10L) <= toCheckTimeStamp) && (currentTimeStamp >= toCheckTimeStamp)) { // to check the timestamp for 10 seconds difference
            sum += anotherRow.getString(5).toDouble // RSSI weightage values
          }
          if ((currentTimeStamp - 10L) > toCheckTimeStamp) {
            break
          }
        }
      }
    }
    (row.getString(0), row.getString(1), row.getString(2), row.getString(3), row.getString(4), row.getString(5), sum.toString)
  })
}

val averageDF = filterop.rdd
  .groupBy(row => (row(0), row(2)))
  .flatMap { case (_, buffer) => avgCalc(buffer.toList.sortBy(_.getString(1).toLong)) }
  .toDF("Tag", "Timestamp", "Listner", "X", "Y", "RSSI", "AvgCalc")

As a final comment, I am quite sure a better/cleaner implementation of the avgCalc function is possible, but I'll leave it to you to play around with that :)
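For what it's worth, one way to express the scan without scala.util.control.Breaks is takeWhile on the descending-sorted buffer. This is only a hedged sketch on plain Array[String] rows (no Spark, hypothetical sample data), not a drop-in replacement for the Row-based code above:

```scala
// Hypothetical sketch: windowed RSSI sum without scala.util.control.Breaks.
// Rows are Array[String] columns (tag, timestamp, listener, x, y, rssi),
// sorted by timestamp in descending order, as in the Spark pipeline.
object CleanAvgCalc {
  def windowSum(buffer: Seq[Array[String]], current: Array[String]): Double = {
    val currentTs = current(1).toLong
    buffer
      .takeWhile(row => row(1).toLong >= currentTs - 10L) // stop at the first too-old row
      .filter(row => row(1).toLong <= currentTs)          // ignore rows newer than current
      .map(_(5).toDouble)
      .sum
  }

  def main(args: Array[String]): Unit = {
    val rows = Seq(
      Array("tagA", "20", "L1", "0", "0", "3.0"),
      Array("tagA", "15", "L1", "0", "0", "2.0"),
      Array("tagA", "5",  "L1", "0", "0", "1.0"))
    println(windowSum(rows, rows.head)) // 3.0 + 2.0 = 5.0
  }
}
```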