recursive value outputColumns needs type - spark-scala

Asked: 2016-12-15 02:29:24

Tags: scala apache-spark

I get the error "recursive value outputColumns needs type" when running the code below. Can anyone help me?

    import sqlContext.implicits._
    import org.apache.spark.sql.types.StringType

    val zipArrays = udf { seqs: Seq[Seq[String]] => for(i <- seqs.head.indices) yield seqs.fold(Seq.empty)((accu, seq) => accu :+ seq(i)) }

    val columnsToSelect = Seq($"CP_PAY_MADE_ON", $"CP_PRV_TIN", $"CP_PAYER_835_ID")
    val columnsToZip = Seq($"CLM_STR_DT", $"CLM_END_DT")

    val outputColumns = columnsToSelect ++ columnsToZip.zipWithIndex.map { case (column, index) => $"col".getItem(index).as(column.toString())

    val output = payment_summary_new_columns.select($"CP_PAY_MADE_ON", $"CP_PRV_TIN", $"CP_PAYER_835_ID", explode(zipArrays(array(columnsToZip: _*)))).select(outputColumns:_*) //gives error recursive value outputColumns needs type

    output.show()

0 Answers:

No answers yet.
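The error is most likely caused by a missing closing brace: the `map { case (column, index) => ... }` block in the definition of `outputColumns` is never closed, so the parser treats the following `val output = ... .select(outputColumns:_*)` line as part of that same definition. `outputColumns` then refers to itself, and the compiler reports "recursive value outputColumns needs type". Below is a minimal sketch of the corrected snippet, assuming the same DataFrame `payment_summary_new_columns` and column names as in the question; the `org.apache.spark.sql.functions` import is added because `udf`, `array`, and `explode` come from there.

    import org.apache.spark.sql.functions.{array, explode, udf}
    import sqlContext.implicits._

    // Element-wise zip of several array columns into one array of "rows".
    val zipArrays = udf { seqs: Seq[Seq[String]] =>
      for (i <- seqs.head.indices)
        yield seqs.fold(Seq.empty)((accu, seq) => accu :+ seq(i))
    }

    val columnsToSelect = Seq($"CP_PAY_MADE_ON", $"CP_PRV_TIN", $"CP_PAYER_835_ID")
    val columnsToZip    = Seq($"CLM_STR_DT", $"CLM_END_DT")

    // The closing brace after the map block is what the question is missing:
    // without it, the next statement is swallowed into this definition and
    // outputColumns ends up referring to itself.
    val outputColumns = columnsToSelect ++ columnsToZip.zipWithIndex.map {
      case (column, index) => $"col".getItem(index).as(column.toString())
    }

    val output = payment_summary_new_columns
      .select($"CP_PAY_MADE_ON", $"CP_PRV_TIN", $"CP_PAYER_835_ID",
        explode(zipArrays(array(columnsToZip: _*))))
      .select(outputColumns: _*)

    output.show()

With the brace restored, `outputColumns` has a non-recursive definition and no explicit type annotation is needed.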