Spark Structured Streaming: primary key in JDBC sink

Posted: 2019-05-02 14:49:06

Tags: mysql apache-spark apache-spark-sql spark-structured-streaming apache-spark-dataset

I am reading a stream of data from a Kafka topic using Structured Streaming in update mode and then applying some transformations.

I then created a JDBC sink to push the data into MySQL in Append mode. The question is: how do I tell the sink which column is my primary key and have it update rows based on that key, so that my table does not end up with duplicate rows?

    // imports needed for the snippet below
    import org.apache.spark.sql.{DataFrame, Dataset, SaveMode}
    import org.apache.spark.sql.streaming.{OutputMode, Trigger}
    import scala.concurrent.duration._

    val df: DataFrame = spark
      .readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "<List-here>")
      .option("subscribe", "emp-topic")
      .load()

    import spark.implicits._
    // value in kafka is bytes, so cast it to String
    val empList: Dataset[Employee] = df
      .selectExpr("CAST(value AS STRING)")
      .map(row => Employee(row.getString(0)))

    // window aggregations on 1 min windows
    val aggregatedDf = ......

    // How to tell here that id is my primary key and do the update
    // based on the id column?
    aggregatedDf
      .writeStream
      .trigger(Trigger.ProcessingTime(60.seconds))
      .outputMode(OutputMode.Update)
      .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
        batchDF
          .select("id", "name", "salary", "dept")
          .write.format("jdbc")
          .option("url", "jdbc:mysql://localhost/empDb")
          .option("driver", "com.mysql.cj.jdbc.Driver")
          .option("dbtable", "empDf")
          .option("user", "root")
          .option("password", "root")
          .mode(SaveMode.Append)
          .save()
      }
      .start() // start the streaming query

2 Answers:

Answer 0 (score: 2):

One approach is to use ON DUPLICATE KEY UPDATE together with foreachPartition.

Below is a pseudocode snippet.

    import java.sql.{Connection, DriverManager}
    import org.apache.spark.sql.{DataFrame, Row}

    /**
     * Insert into the database using foreachPartition.
     * @param dataframe                   DataFrame to write
     * @param sqlDatabaseConnectionString JDBC connection string
     * @param sqlTableName                target table name
     */
    def insertToTable(dataframe: DataFrame, sqlDatabaseConnectionString: String, sqlTableName: String): Unit = {

      // numPartitions = number of simultaneous DB connections you are planning to allow
      val numPartitions = 8
      val repartitioned = dataframe.repartition(numPartitions)

      val tableHeader: String = repartitioned.columns.mkString(",")
      repartitioned.foreachPartition { partition: Iterator[Row] =>
        // Note: one connection per partition (a better way is to use a connection pool)
        val sqlExecutorConnection: Connection = DriverManager.getConnection(sqlDatabaseConnectionString)
        try {
          // A batch size of 1000 is used since some databases cannot take batches larger than 1000, e.g. Azure SQL
          partition.grouped(1000).foreach { group =>
            val insertString = new scala.collection.mutable.StringBuilder()
            group.foreach { record =>
              insertString.append("('" + record.mkString("','") + "'),")
            }

            // MySQL matches rows on the table's PRIMARY KEY (id here) and
            // updates the remaining columns instead of inserting a duplicate.
            val sql =
              s"""
                 |INSERT INTO $sqlTableName ($tableHeader)
                 |VALUES ${insertString.dropRight(1)}
                 |ON DUPLICATE KEY UPDATE
                 |name = VALUES(name), salary = VALUES(salary), dept = VALUES(dept)
               """.stripMargin

            sqlExecutorConnection.createStatement().executeUpdate(sql)
          }
        } finally {
          sqlExecutorConnection.close() // close the connection
        }
      }
    }
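To connect this back to the question's streaming query, here is a rough sketch of calling the helper from foreachBatch (it assumes the insertToTable method above; passing the credentials in the JDBC URL is just one option, not something the original answer specified):

    aggregatedDf
      .writeStream
      .outputMode(OutputMode.Update)
      .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
        // each micro-batch is upserted via INSERT ... ON DUPLICATE KEY UPDATE
        insertToTable(
          batchDF.select("id", "name", "salary", "dept"),
          "jdbc:mysql://localhost/empDb?user=root&password=root",
          "empDf")
      }
      .start()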

You could use a PreparedStatement instead of a plain JDBC Statement.
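For instance, a minimal sketch of the same upsert with a PreparedStatement, assuming the question's column names (the id and salary types are assumptions):

    import java.sql.DriverManager
    import org.apache.spark.sql.Row

    val upsertSql =
      """INSERT INTO empDf (id, name, salary, dept) VALUES (?, ?, ?, ?)
        |ON DUPLICATE KEY UPDATE name = VALUES(name), salary = VALUES(salary), dept = VALUES(dept)""".stripMargin

    dataframe.foreachPartition { partition: Iterator[Row] =>
      val conn = DriverManager.getConnection(sqlDatabaseConnectionString)
      val stmt = conn.prepareStatement(upsertSql)
      try {
        partition.foreach { row =>
          stmt.setLong(1, row.getAs[Long]("id"))         // assumed type
          stmt.setString(2, row.getAs[String]("name"))
          stmt.setDouble(3, row.getAs[Double]("salary")) // assumed type
          stmt.setString(4, row.getAs[String]("dept"))
          stmt.addBatch()
        }
        stmt.executeBatch() // values are bound as parameters, so no quoting or escaping issues
      } finally {
        conn.close()
      }
    }

Binding parameters this way also avoids the manual string concatenation used in the snippet above.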

Further reading: SPARK SQL - update MySql table using DataFrames and JDBC

Answer 1 (score: 0):

Do you know why I get this error when I use writeStream and jdbc together against devdb?

java.lang.UnsupportedOperationException: Data source jdbc does not support streamed writing

Also, I heard that one workaround is to use foreachBatch. I tried .foreachBatch { (batchDF: DataFrame, batchId: Long) => batchDF.writeStream .... but I get this error: value foreachBatch is not a member of org.apache.spark.sql.streaming.DataStreamWriter[org.apache.spark.sql.Row]
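For reference, a minimal sketch of the usual fix, assuming Spark 2.4+ (foreachBatch does not exist in earlier releases, which is one common cause of the "value foreachBatch is not a member" error): the jdbc data source only supports batch writes, so inside foreachBatch each micro-batch is written with batchDF.write rather than batchDF.writeStream. Connection details below simply reuse the placeholders from the question.

    import org.apache.spark.sql.{DataFrame, SaveMode}

    aggregatedDf
      .writeStream
      .outputMode("update")
      .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
        batchDF.write                      // batch writer, not writeStream
          .format("jdbc")
          .option("url", "jdbc:mysql://localhost/empDb")
          .option("driver", "com.mysql.cj.jdbc.Driver")
          .option("dbtable", "empDf")
          .option("user", "root")
          .option("password", "root")
          .mode(SaveMode.Append)
          .save()
      }
      .start()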