Calculating the upper and lower bounds for Spark JDBC partitioning

Time: 2019-04-08 10:15:16

Tags: sql-server scala apache-spark spark-jdbc

I am reading data from MS SQL Server using Spark JDBC with Scala, and I want to partition the data by a specified column. I don't want to set the upper and lower bounds of the partition column manually. Can I read some kind of maximum and minimum value of this field and use them as the upper/lower bounds? Also, I want this query to read all of the data from the database. Currently, the query mechanism looks like this:

def jdbcOptions() = Map[String,String](
    "driver" -> "db.driver",
    "url" -> "db.url",
    "user" -> "db.user",
    "password" -> "db.password",
    "customSchema" -> "db.custom_schema",
    "dbtable" -> "(select * from TestAllData where dayColumn > 'dayValue') as subq",
    "partitionColumn" -> "db.partitionColumn",
    "lowerBound" -> "1",
    "upperBound" -> "30",
    "numPartitions" -> "5"
)

    val dataDF = sparkSession
      .read
      .format("jdbc")
      .options(jdbcOptions())
      .load()

1 Answer:

Answer 0 (score: 0)

If the partition column (db.partitionColumn here) is a numeric or date field, you can retrieve its bounds with the following code:

def jdbcBoundOptions() = Map[String,String](
    "driver" -> "db.driver",
    "url" -> "db.url",
    "user" -> "db.user",
    "password" -> "db.password",
    "customSchema" -> "db.custom_schema",
    // SQL Server requires named columns in a derived table, so alias the aggregates
    "dbtable" -> "(select max(db.partitionColumn) as maxDay, min(db.partitionColumn) as minDay from TestAllData where dayColumn > 'dayValue') as subq",
    "numPartitions" -> "1"
)

val boundRow = sparkSession
    .read
    .format("jdbc")
    .options(jdbcBoundOptions())
    .load()
    .first()

val maxDay = boundRow.getInt(0)  // max(partitionColumn), assuming an integer column
val minDay = boundRow.getInt(1)  // min(partitionColumn)
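
If the partition column is a date rather than an integer, the same approach works, but the bounds are read with getDate instead of getInt. This is a minimal sketch, not part of the original answer; it assumes the column maps to java.sql.Date and that the Spark version in use accepts date strings for lowerBound/upperBound:

val maxDate = boundRow.getDate(0)   // java.sql.Date
val minDate = boundRow.getDate(1)

// java.sql.Date.toString produces "yyyy-MM-dd", which can be passed as the
// lowerBound/upperBound option values for a date partition column.
val lowerBoundValue = minDate.toString
val upperBoundValue = maxDate.toString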

Note that numPartitions must be 1 here, and in that case we don't need to specify any other partitioning details (partitionColumn, lowerBound, upperBound), as described in the Spark documentation.

Finally, you can use the retrieved bounds in the original query:

def jdbcOptions() = Map[String,String](
    "driver" -> "db.driver",
    "url" -> "db.url",
    "user" -> "db.user",
    "password" -> "db.password",
    "customSchema" -> "db.custom_schema",
    "dbtable" -> "(select * from TestAllData where dayColumn > 'dayValue') as subq",
    "partitionColumn" -> "db.partitionColumn",
    "lowerBound" -> minDay.toString,
    "upperBound" -> maxDay.toString,
    "numPartitions" -> "5"
)
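
For reference, with these options Spark splits the range between lowerBound and upperBound into numPartitions strides and issues one JDBC query per partition. The sketch below mirrors that stride logic for the integer bounds above ("partitionColumn" stands in for the real column name); it is only an approximation of what Spark's JDBC relation generates, not the exact implementation:

// Approximate reconstruction of the per-partition WHERE clauses Spark builds
// from lowerBound = 1, upperBound = 30, numPartitions = 5.
val lower = 1L
val upper = 30L
val numPartitions = 5
val stride = (upper - lower) / numPartitions

val predicates = (0 until numPartitions).map { i =>
  val start = lower + i * stride
  val end   = start + stride
  if (i == 0) s"partitionColumn < $end or partitionColumn is null"
  else if (i == numPartitions - 1) s"partitionColumn >= $start"
  else s"partitionColumn >= $start and partitionColumn < $end"
}
// Each predicate becomes the WHERE clause of one of the 5 parallel reads, so the
// bounds only control the partition stride; rows outside [lowerBound, upperBound]
// are still read by the first or last partition.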