Spark does not delete old data in MemSQL when using Overwrite mode

Time: 2018-05-29 09:59:40

Tags: apache-spark memsql

I am running a Spark job with save mode Overwrite. I expected it to delete the existing data in the table and insert the new data; instead, it simply appends the new data to what is already there.

I expected the same behavior as save mode Overwrite when writing to a file system.
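For comparison, this is the file-system behavior I mean (a minimal sketch, assuming a DataFrame df and a placeholder output path):

    import org.apache.spark.sql.SaveMode

    // With SaveMode.Overwrite, Spark removes the existing output under the path before writing
    df.write
      .mode(SaveMode.Overwrite)
      .parquet("/tmp/orc_pos_test")   // placeholder path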

import org.apache.log4j.Logger
import org.apache.spark.sql.{SaveMode, SparkSession}
// MemSQLConnectionInfo, SaveToMemSQLConf, TableIdentifier and saveToMemSQL come from the
// MemSQL Spark connector; the exact package paths depend on the connector version on the classpath
import com.memsql.spark.connector._

object HiveToMemSQL {
  def main(args: Array[String]) {

    val log = Logger.getLogger(HiveToMemSQL.getClass)

    //var options = getOptions()
    //val cmdLineArgs = new CommandLineOptions().validateArguments(args, options)

    //if (cmdLineArgs != null) {

    // Get command line options values
    var query = "select * from default.students"
    // Get destination DB details from command line
    val destHostName ="localhost"
    //val destUserName = cmdLineArgs.getOptionValue("destUserName")
    //val destPassword = cmdLineArgs.getOptionValue("destPassword")
    val destDBName ="tsg"
    val destTable = "ORC_POS_TEST"
    val destPort = 3308
    val destConnInfo = MemSQLConnectionInfo(destHostName, destPort, "root", "", destDBName)

    val spark = SparkSession.builder().appName("Hive To MemSQL")
    .config("maxRecordsPerBatch" ,"100")
    .config("spark.memsql.host", destConnInfo.dbHost)
    .config("spark.memsql.port", destConnInfo.dbPort.toString)
    .config("spark.memsql.user", destConnInfo.user)
    .config("spark.memsql.password", destConnInfo.password)
    .config("spark.memsql.defaultDatabase", destConnInfo.dbName)
    //          .config("org.apache.spark.sql.SaveMode" , SaveMode.Overwrite.toString())
    .config("spark.memsql.defaultSaveMode"  , "Overwrite")
    .config("maxRecordsPerBatch" ,"100").master("local[*]").enableHiveSupport().getOrCreate()

    import spark.implicits._
    import spark.sql

    // Queries are expressed in HiveQL
    val sqlDF = spark.sql("select * from tsg.v_pos_krogus_wk_test")
    log.info("Successfully read data from source")
    sqlDF.printSchema()

    // MemSQL destination DB Master Aggregator, Port, Username and Password

    // Disabling writing to leaf nodes directly
    var saveConf = SaveToMemSQLConf(spark.memSQLConf,
    params = Map("useKeylessShardingOptimization" -> "false", 
                 "writeToMaster" -> "false" , 
                 "saveMode" -> SaveMode.Overwrite.toString()))

    log.info("Save mode before  :" + saveConf.saveMode )
    saveConf= saveConf.copy(saveMode=SaveMode.Overwrite)
    log.info("Save mode after  :" + saveConf.saveMode )

    val tableIdent = TableIdentifier(destDBName, destTable)
    sqlDF.saveToMemSQL(tableIdent, saveConf)

    log.info("Successfully completed writing to MemSQL DB")
}}

1 Answer:

Answer 0 (score: 1):

With this setting, the MemSQL Spark Connector writes REPLACE statements. REPLACE works exactly like INSERT, except that if an old row in the table has the same value as the new row for a PRIMARY KEY, the old row is deleted before the new row is inserted. See https://docs.memsql.com/sql-reference/v6.0/replace/. In other words, only rows whose primary keys collide with incoming rows are replaced; all other existing rows stay in the table, which is why the write looks like an append rather than an overwrite.
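If a true overwrite (drop all existing rows, then insert) is needed, one option is to empty the table yourself before calling saveToMemSQL. Below is a minimal sketch using plain JDBC (MemSQL speaks the MySQL wire protocol, so a MySQL JDBC driver on the classpath is assumed); the host, port, database, table, and credentials reuse the values from the question's code and should be adjusted as needed:

    import java.sql.DriverManager

    // Empty the destination table first so the subsequent REPLACE-based write
    // behaves like a clean overwrite rather than an upsert on the primary key
    val url = s"jdbc:mysql://$destHostName:$destPort/$destDBName"
    val conn = DriverManager.getConnection(url, "root", "")
    try {
      conn.createStatement().execute(s"TRUNCATE TABLE $destTable")
    } finally {
      conn.close()
    }

    // Then write with the connector as before; the table is now empty
    sqlDF.saveToMemSQL(tableIdent, saveConf)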