Reformatting Scala code and if/else statements

Asked: 2018-08-23 21:37:34

Tags: scala apache-spark apache-spark-sql

I have written Scala code that combines Spark DataFrames. First, it works, but only when I don't use the if/else statement. It also isn't clean code, so I'd like to know how to restructure it.

Second, the if/else statement doesn't work. How can I assign a value to a variable from inside it, the way I would in Python, and use that variable as a DataFrame later?

Sorry, I'm new to Scala.

    %scala

    for(n <- Scalaconfigs){
      var bulkCopyMetadata = new BulkCopyMetadata

      val sourceTable = n(0)
      val targetTable  = n(1) 

      println(sourceTable)
      println(targetTable)
      val df = spark.sql(s"SELECT * FROM ${sourceTable}")


      if (sourceTable == "est.Values"){
        val vs1 = df.withColumn("Duration", 'Duration.cast("double")).withColumn("StartUTC", 'StartTimeUTC.cast("bigint")).select('DeviceID, 'DeviceType, 'StartUTC, 'Duration as 'Duration)

      }
      else if  (sourceTable == "est.tests"){
         val vs1 = df.withColumn("DateUTC", 'DateUTC.cast("Timestamp")).select('ID, 'DateUTC as 'DateUTC)

      }

      val writeConfig = Config(Map(
        "url"               -> url,
        "databaseName"      -> databaseName,
        "dbTable"           -> targetTable,
        "user"              -> user,
        "password"          -> password,
        "connectTimeout"    -> "5",
        "bulkCopyBatchSize" -> "100000",
        "bulkCopyTableLock" -> "true",
        "bulkCopyTimeout"   -> "600"
      ))

      vs1.bulkCopyToSqlDB(writeConfig)
      // vs1 does not get a value when I use if/else statements


    }

I get the error "variable 'vs1' not found." I know it's because vs1 is defined inside the if/else block, but how can I use it outside? I tried declaring it above the block, but I'm not sure what data type to give it.

1 Answer:

Answer 0 (score: 1)

vs1 is in a local scope and is not visible outside the if/else branches. Declare vs1 in the outer scope instead, and try a pattern match in place of if/else:

    val vs1 = sourceTable match {
      case "est.Values" =>
        df.withColumn("Duration", 'Duration.cast("double"))
          .withColumn("StartUTC", 'StartTimeUTC.cast("bigint"))
          .select('DeviceID, 'DeviceType, 'StartUTC, 'Duration as 'Duration)
      case "est.tests" =>
        df.withColumn("DateUTC", 'DateUTC.cast("Timestamp"))
          .select('ID, 'DateUTC as 'DateUTC)
    }
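
To see why this works, here is a minimal self-contained sketch (no Spark; the table names are reused from the question, the helper `pickColumns` is hypothetical) showing that in Scala both `match` and `if`/`else` are expressions: each branch yields a value, and the whole expression can be assigned to a `val` declared once in the outer scope. Note also the default case, which avoids a `MatchError` when an unknown table name comes through:

```scala
object MatchDemo {
  // `match` is an expression: every case produces a value,
  // and the result of the whole match is assigned to the val.
  def pickColumns(sourceTable: String): Seq[String] = sourceTable match {
    case "est.Values" => Seq("DeviceID", "DeviceType", "StartUTC", "Duration")
    case "est.tests"  => Seq("ID", "DateUTC")
    case other        =>
      // Default case: without it, an unmatched value throws a MatchError
      sys.error(s"Unknown source table: $other")
  }

  def main(args: Array[String]): Unit = {
    val cols = pickColumns("est.tests")
    println(cols.mkString(","))  // prints "ID,DateUTC"

    // if/else is an expression too, so this also assigns a value:
    val width = if (cols.size > 2) "wide" else "narrow"
    println(width)  // prints "narrow"
  }
}
```

The same principle is what makes the answer's snippet work: `val vs1 = sourceTable match { ... }` binds the DataFrame returned by whichever case fires, so `vs1` is visible for the `bulkCopyToSqlDB` call afterwards.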