Slick 3.0 bulk insert or update (upsert)

Posted: 2016-01-25 20:14:26

Tags: mysql sql scala slick typesafe

What is the correct way to do a bulk insertOrUpdate (upsert) in Slick 3.0?

I am using MySQL, where the corresponding query would be

INSERT INTO table (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE c=VALUES(a)+VALUES(b);

MySQL bulk INSERT or UPDATE

Here is my current code, which is very slow :-(

// FIXME -- this is slow but will stop repeats; an insertOrUpdate
// function that takes a list would be much better
val rowsInserted = rows.map {
  row => await(run(TableQuery[FooTable].insertOrUpdate(row)))
}.sum

What I am looking for is an equivalent that upserts a whole list of rows in one call.
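In other words, something with roughly this shape (a hypothetical signature for illustration only, not an existing Slick 3.0 method):

// Hypothetical: a bulk upsert that takes the whole collection in one call
// and yields the total number of affected rows.
def insertOrUpdateAll(rows: Iterable[Foo]): DBIO[Int]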

3 Answers:

Answer 0 (score: 37)

There are a few ways to make this code faster (each one should be faster than the one before it, but the code becomes progressively less idiomatic Slick):

  • Run insertOrUpdateAll instead of insertOrUpdate if you are on slick-pg 0.16.1+:

    await(run(TableQuery[FooTable].insertOrUpdateAll(rows))).sum
    
  • Run your DBIO actions all at once, instead of waiting for each one to commit before running the next:

    val toBeInserted = rows.map { row => TableQuery[FooTable].insertOrUpdate(row) }
    val inOneGo = DBIO.sequence(toBeInserted)
    val dbioFuture = run(inOneGo)
    // Optionally, you can add a `.transactionally`
    // and / or `.withPinnedSession` here to pin all of these upserts
    // to the same transaction / connection
    // which *may* get you a little more speed:
    // val dbioFuture = run(inOneGo.transactionally)
    val rowsInserted = await(dbioFuture).sum
    
  • Drop down to the JDBC level and run your upsert in one go (idea via this answer); a usage sketch follows after this list:

    val SQL = """INSERT INTO table (a,b,c) VALUES (?, ?, ?)
    ON DUPLICATE KEY UPDATE c=VALUES(a)+VALUES(b);"""

    SimpleDBIO[Array[Int]] { session =>
      val statement = session.connection.prepareStatement(SQL)
      rows.foreach { row =>
        // bind one row's values and add it to the JDBC batch
        statement.setInt(1, row.a)
        statement.setInt(2, row.b)
        statement.setInt(3, row.c)
        statement.addBatch()
      }
      // send the whole batch in one go; returns the update count for each row
      statement.executeBatch()
    }
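To actually execute this batch and recover the per-row counts, it can be plugged into the same run/await helpers the question uses. A minimal usage sketch (the name upsertBatch is illustrative and assumed to be bound to the SimpleDBIO value above):

// Sketch: run the JDBC-level batch and sum the update counts from executeBatch().
// `run` and `await` are the same helpers used in the question's code.
val rowsInserted: Int = await(run(upsertBatch)).sum

// Equivalent without those helpers, given a Slick Database handle `db`:
// val rowsInserted = Await.result(db.run(upsertBatch), Duration.Inf).sum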
    

Answer 1 (score: 1)

As you can see in the Slick examples, you can use the ++= function to insert with the JDBC batch-insert feature. For instance:

val foos = TableQuery[FooTable]
val rows: Seq[Foo] = ...
foos ++= rows // here slick will use batch insert

You can also "batch" the batch insert by grouping the sequence of rows:

val batchSize = 1000
rows.grouped(batchSize).foreach { group => foos ++= group }
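Note that ++= only builds a DBIO action, so each group still has to be run against the database. A minimal end-to-end sketch, assuming a MySQL profile and a configured Database handle named db (db, rows, Foo and FooTable stand in for your own definitions):

import slick.driver.MySQLDriver.api._   // slick.jdbc.MySQLProfile.api._ on Slick 3.2+
import scala.concurrent.Await
import scala.concurrent.duration.Duration

// Group the rows, turn each group into one batched insert action,
// combine them into a single DBIO and run it; each ++= becomes a JDBC batch.
val foos = TableQuery[FooTable]
val batchSize = 1000
val batchedInsert = DBIO.sequence(
  rows.grouped(batchSize).map(group => foos ++= group).toSeq
)
val insertedCounts = Await.result(db.run(batchedInsert), Duration.Inf)

Keep in mind that, unlike insertOrUpdate, ++= is a plain INSERT, so it will fail on duplicate keys rather than update the existing rows.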

Answer 2 (score: 0)

Use sqlu

This demo works:

case ("insertOnDuplicateKey",answers:List[Answer])=>{
  def buildInsert(r: Answer): DBIO[Int] =
    sqlu"insert into answer (aid,bid,sbid,qid,ups,author,uid,nick,pub_time,content,good,hot,id,reply,pic,spider_time) values (${r.aid},${r.bid},${r.sbid},${r.qid},${r.ups},${r.author},${r.uid},${r.nick},${r.pub_time},${r.content},${r.good},${r.hot},${r.id},${r.reply},${r.pic},${r.spider_time}) ON DUPLICATE KEY UPDATE `aid`=values(aid),`bid`=values(bid),`sbid`=values(sbid),`qid`=values(qid),`ups`=values(ups),`author`=values(author),`uid`=values(uid),`nick`=values(nick),`pub_time`=values(pub_time),`content`=values(content),`good`=values(good),`hot`=values(hot),`id`=values(id),`reply`=values(reply),`pic`=values(pic),`spider_time`=values(spider_time)"
  val inserts: Seq[DBIO[Int]] = answers.map(buildInsert)
  val combined: DBIO[Seq[Int]] = DBIO.sequence(inserts)
  DEST_DB.run(combined).onComplete(data => {
    if (data.isSuccess) {
      // only read data.get on success; on failure it would throw and skip the retry
      println("insertOnDuplicateKey data result " + data.get.mkString)
      val lastid = answers.last.id
      Sync.lastActor ! ("upsert", tablename, lastid)
    } else {
      // retry
      self ! ("insertOnDuplicateKey", answers)
    }
  })
}

I tried to build this as a single SQL statement with sqlu, but it fails; the problem seems to be that sqlu does not splice the interpolated strings as raw SQL (they are bound as parameters instead).

This demo does not work:

case ("insertOnDuplicateKeyError",answers:List[Answer])=>{
  def buildSql(execpre:String,values: String,execafter:String): DBIO[Int] = sqlu"$execpre $values $execafter"
  val execpre="insert into answer (aid,bid,sbid,qid,ups,author,uid,nick,pub_time,content,good,hot,id,reply,pic,spider_time)  values "
  val execafter=" ON DUPLICATE KEY UPDATE  `aid`=values(aid),`bid`=values(bid),`sbid`=values(sbid),`qid`=values(qid),`ups`=values(ups),`author`=values(author),`uid`=values(uid),`nick`=values(nick),`pub_time`=values(pub_time),`content`=values(content),`good`=values(good),`hot`=values(hot),`id`=values(id),`reply`=values(reply),`pic`=values(pic),`spider_time`=values(spider_time)"
  val valuesstr=answers.map(row=>("("+List(row.aid,row.bid,row.sbid,row.qid,row.ups,"'"+row.author+"'","'"+row.uid+"'","'"+row.nick+"'","'"+row.pub_time+"'","'"+row.content+"'",row.good,row.hot,row.id,row.reply,row.pic,"'"+row.spider_time+"'").mkString(",")+")")).mkString(",\n")
  val insertOrUpdateAction=DBIO.seq(
    buildSql(execpre,valuesstr,execafter)
  )
  DEST_DB.run(insertOrUpdateAction).onComplete(data=>{
    if (data.isSuccess) {
      println("insertOnDuplicateKey data result " + data)
      val lastid = answers.last.id
      Sync.lastActor ! ("upsert", tablename, lastid)
    } else {
      // retry
      self ! ("insertOnDuplicateKey2", answers)
    }
  })
}
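The single-statement version fails because a plain $ inside sql/sqlu turns each interpolated value into a bind parameter rather than splicing it into the statement. Slick's #$ interpolation splices the string verbatim, so one possible fix is a one-line change to buildSql (a sketch; only reasonable here because the spliced fragments are built from the program's own data, not user input):

// #$ splices the precomputed fragments into the statement as raw SQL text;
// with plain $ they are bound as three string parameters, which is why the
// demo above fails.
def buildSql(execpre: String, values: String, execafter: String): DBIO[Int] =
  sqlu"#$execpre #$values #$execafter"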

A MySQL sync tool built on Scala Slick: https://github.com/cclient/ScalaMysqlSync