Connecting to Apache Phoenix with JDBC while using Slick for Postgres in Play 2

Date: 2018-01-17 07:41:51

Tags: scala jdbc playframework slick phoenix

I have a Play Framework application that needs to connect to Postgres via Slick and, at the same time, to Apache Phoenix via plain JDBC.

The Postgres connection works fine, but I cannot connect to Phoenix from Play.

I tested the Phoenix connection in a standalone Scala application, without Play, and there it works.

Here is the standalone application:

import java.sql._

object TestPhoenix extends App {

  val connectionString = "jdbc:phoenix:srv1,srv2,srv3,srv4:/hbase"
  val request = "select * from MY_TABLE limit 10"

  // Register the Phoenix JDBC driver, then open a plain JDBC connection.
  Class.forName("org.apache.phoenix.jdbc.PhoenixDriver")
  val conn = DriverManager.getConnection(connectionString)

  val stmt = conn.prepareStatement(request)
  val rs = stmt.executeQuery()

  // Iterate over the result set, printing the first column of each row.
  while (rs.next()) {
    println(rs.getBytes(1))
  }

  conn.close()
}

I tried using the same code in a controller of the Play application, but it does not work there. Going through Slick does not work either:

import scala.concurrent.ExecutionContext
import slick.jdbc.JdbcBackend
import slick.jdbc.JdbcBackend.Database

class DbRequester(connectionString: String, request: String)(implicit val ec: ExecutionContext) {

  // Register the Phoenix driver before Slick opens any connection.
  Class.forName("org.apache.phoenix.jdbc.PhoenixDriver")

  // Build a Slick database from the raw JDBC URL, then take a plain connection from it.
  val db: JdbcBackend.DatabaseDef =
    Database.forURL(connectionString, driver = "org.apache.phoenix.jdbc.PhoenixDriver")
  val conn = db.source.createConnection()

  val stmt = conn.prepareStatement(request)

  def sendRequest() = stmt.executeQuery()
}

Here is the stack trace:

2018-01-17 08:26:38.690 [error] - org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher - hconnection-0x284bba4a-0x2009aa21ee3093c, quorum=srv1:2181,srv2:2181,srv3:2181,srv4:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/table/MY_TABLE
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:354)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:624)
    at org.apache.hadoop.hbase.zookeeper.ZKTableStateClientSideReader.getTableState(ZKTableStateClientSideReader.java:185)
    at org.apache.hadoop.hbase.zookeeper.ZKTableStateClientSideReader.isDisabledTable(ZKTableStateClientSideReader.java:59)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.isTableOnlineState(ZooKeeperRegistry.java:127)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isTableDisabled(ConnectionManager.java:981)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1150)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:303)
    at org.apache.hadoop.hbase.client.ScannerCallable.prepare(ScannerCallable.java:154)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.prepare(ScannerCallableWithReplicas.java:376)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:135)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[INFO] [01/17/2018 08:26:37.033] [sbt-web-scheduler-1] [akka.actor.ActorSystemImpl(sbt-web)] starting new LARS thread
[ERROR] [SECURITY][01/17/2018 08:26:38.684] [sbt-web-scheduler-1] [akka.actor.ActorSystemImpl(sbt-web)] Uncaught error from thread [sbt-web-scheduler-1]: GC overhead limit exceeded, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[sbt-web]
java.lang.OutOfMemoryError: GC overhead limit exceeded

[INFO] [01/17/2018 08:26:38.690] [Thread-2] [CoordinatedShutdown(akka://sbt-web)] Starting coordinated shutdown from JVM shutdown hook
[ERROR] [01/17/2018 08:26:38.684] [play-dev-mode-scheduler-1] [akka.actor.ActorSystemImpl(play-dev-mode)] exception on LARS’ timer thread
java.lang.OutOfMemoryError: GC overhead limit exceeded

2018-01-17 08:26:40.607 [error] - akka.actor.ActorSystemImpl - exception on LARS’ timer thread
java.lang.OutOfMemoryError: GC overhead limit exceeded
Uncaught error from thread [play-actors-scheduler-1]: GC overhead limit exceeded, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for for ActorSystem[play-actors]
java.lang.OutOfMemoryError: GC overhead limit exceeded
Uncaught error from thread [play-dev-mode-scheduler-1]: GC overhead limit exceeded, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for for ActorSystem[play-dev-mode]
java.lang.OutOfMemoryError: GC overhead limit exceeded
2018-01-17 08:26:46.072 [error] - akka.actor.ActorSystemImpl - Uncaught error from thread [play-actors-scheduler-1]: GC overhead limit exceeded, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[play-actors]
java.lang.OutOfMemoryError: GC overhead limit exceeded
[INFO] [01/17/2018 08:26:40.601] [play-dev-mode-scheduler-1] [akka.actor.ActorSystemImpl(play-dev-mode)] starting new LARS thread
[ERROR] [SECURITY][01/17/2018 08:26:46.073] [play-dev-mode-scheduler-1] [akka.actor.ActorSystemImpl(play-dev-mode)] Uncaught error from thread [play-dev-mode-scheduler-1]: GC overhead limit exceeded, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[play-dev-mode]
java.lang.OutOfMemoryError: GC overhead limit exceeded

2018-01-17 08:26:48.784 [warn] - com.zaxxer.hikari.pool.HikariPool - db - Thread starvation or clock leap detected (housekeeper delta=58s114ms).
[WARN] [01/17/2018 08:26:58.809] [play-dev-mode-shutdown-hook-1] [CoordinatedShutdown(akka://play-dev-mode)] CoordinatedShutdown from JVM shutdown failed: Futures timed out after [10000 milliseconds]
[WARN] [01/17/2018 08:26:58.809] [Thread-2] [CoordinatedShutdown(akka://sbt-web)] CoordinatedShutdown from JVM shutdown failed: Futures timed out after [10000 milliseconds]

Process finished with exit code 255

I also tried adding Play's jdbc dependency to my build.sbt, but that does not work either, for the reason described here: https://www.playframework.com/documentation/2.6.x/PlaySlickFAQ#A-binding-to-play.api.db.DBApi-was-already-configured
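For reference, a dependency layout that avoids the DBApi binding conflict would look roughly like this (the version numbers are illustrative, not taken from my project):

```scala
// build.sbt (sketch): play-slick brings its own JDBC support,
// so Play's "jdbc" artifact must NOT be listed alongside it.
libraryDependencies ++= Seq(
  "com.typesafe.play"  %% "play-slick"   % "3.0.3",            // illustrative version
  "org.postgresql"     %  "postgresql"   % "42.1.4",           // illustrative version
  "org.apache.phoenix" %  "phoenix-core" % "4.13.1-HBase-1.3"  // illustrative version
)
```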

Nor can I set up the connection through the standard Slick DatabaseConfig, because Slick ships no profile for Apache Phoenix.
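One workaround I am considering is defining a minimal custom profile on top of Slick's generic JdbcProfile. This is only a sketch, under the assumption that the Phoenix driver is close enough to standard JDBC; the object name, package, and config key below are mine, not from any library:

```scala
import slick.basic.DatabaseConfig
import slick.jdbc.JdbcProfile

// Hypothetical: a bare-bones Phoenix profile built on Slick's generic JdbcProfile.
object PhoenixProfile extends JdbcProfile

// application.conf would then reference it (note the trailing '$' for a Scala object):
//
//   phoenix {
//     profile = "myapp.PhoenixProfile$"
//     db {
//       driver = "org.apache.phoenix.jdbc.PhoenixDriver"
//       url    = "jdbc:phoenix:srv1,srv2,srv3,srv4:/hbase"
//     }
//   }
//
// and be loaded with:
//
//   val dbConfig = DatabaseConfig.forConfig[JdbcProfile]("phoenix")
//   val db       = dbConfig.db
```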

So, is there a way to connect to Phoenix while keeping Slick in my project?

0 Answers