The Spark Streaming site at https://spark.apache.org/docs/latest/streaming-programming-guide.html#output-operations-on-dstreams mentions the following code:
dstream.foreachRDD { rdd =>
  rdd.foreachPartition { partitionOfRecords =>
    // ConnectionPool is a static, lazily initialized pool of connections
    val connection = ConnectionPool.getConnection()
    partitionOfRecords.foreach(record => connection.send(record))
    ConnectionPool.returnConnection(connection)  // return to the pool for future reuse
  }
}
I tried to implement it using org.apache.commons.pool2, but running the application fails with the expected java.io.NotSerializableException:
15/05/26 08:06:21 ERROR OneForOneStrategy: org.apache.commons.pool2.impl.GenericObjectPool
java.io.NotSerializableException: org.apache.commons.pool2.impl.GenericObjectPool
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
...
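Roughly, what I tried looks like the sketch below (a sketch only; mySocketFactory stands in for my actual pooled-object factory and sendOverSocket for my send logic). The pool is created on the driver and captured by the foreachPartition closure, so Spark tries to serialize it:

import java.net.Socket
import org.apache.commons.pool2.impl.GenericObjectPool

// hypothetical factory; the real one builds sockets for the target host/port
val pool = new GenericObjectPool[Socket](mySocketFactory) // lives on the driver

dstream.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    // the closure captures `pool`, so Spark attempts to serialize it and
    // fails with java.io.NotSerializableException: GenericObjectPool
    val socket = pool.borrowObject()
    partition.foreach(record => sendOverSocket(socket, record)) // hypothetical send helper
    pool.returnObject(socket)
  }
}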
I'm wondering how realistic it would be to implement a connection pool that is serializable. Has anyone managed to do this?

Thank you.
Answer 0 (score: 12)
To address this "local resource" problem, what's needed is a singleton object, that is, an object guaranteed to be instantiated once and only once per JVM. Luckily, a Scala object provides exactly that functionality out of the box.

The second thing to consider is that this singleton will serve all tasks running on the same JVM that hosts it, so it must take care of concurrency and resource management.

Let's try to sketch (*) such a service:
import java.net.Socket
import org.apache.commons.pool2.ObjectPool

class ManagedSocket(private val pool: ObjectPool[Socket], val socket: Socket) {
  def release() = pool.returnObject(socket)
}

// singleton object
object SocketPool {
  var hostPortPool: Map[(String, Int), ObjectPool[Socket]] = Map()

  sys.addShutdownHook {
    hostPortPool.values.foreach(_.close()) // terminate each pool
  }

  // factory method
  def apply(host: String, port: Int): ManagedSocket = {
    val pool = hostPortPool.getOrElse((host, port), {
      val p = ??? // create new pool for (host, port)
      hostPortPool += (host, port) -> p
      p
    })
    new ManagedSocket(pool, pool.borrowObject())
  }
}
Usage then becomes:
val host = ???
val port = ???

stream.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    val mSocket = SocketPool(host, port)
    partition.foreach { elem =>
      val os = mSocket.socket.getOutputStream()
      // do stuff with os + elem
    }
    mSocket.release()
  }
}
I'm assuming that the GenericObjectPool used in the question takes care of concurrency. Otherwise, access to each pool instance needs to be guarded with some form of synchronization (one possibility is sketched below).
(*) The code is provided to illustrate how such an object could be designed; additional effort is needed to turn it into a working version.
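As one example of that extra effort, here is a sketch (my assumptions, not a tested implementation) of how the ??? above could be filled in with a GenericObjectPool, reusing a BasePooledObjectFactory[Socket] like the SocketFactory shown in the next answer; the synchronized block is one simple way to address the concurrency caveat:

import java.net.Socket
import org.apache.commons.pool2.ObjectPool
import org.apache.commons.pool2.impl.GenericObjectPool

// hypothetical refinement of the factory method inside object SocketPool
def apply(host: String, port: Int): ManagedSocket = SocketPool.synchronized {
  val pool = hostPortPool.getOrElse((host, port), {
    val p: ObjectPool[Socket] = new GenericObjectPool[Socket](new SocketFactory(host, port))
    hostPortPool += (host, port) -> p
    p
  })
  new ManagedSocket(pool, pool.borrowObject())
}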
Answer 1 (score: 2)
The following answer is wrong!
I'm leaving the answer here for reference, but it is wrong for the following reason: socketPool is declared as a lazy val, so it is instantiated on the first access request. Since the SocketPool case class is not Serializable, this means it will be instantiated within each partition. That makes the connection pool useless, because we want to keep connections across partitions and RDDs. It makes no difference whether it is implemented as a companion object or as a case class. Bottom line: the connection pool must be Serializable, and Apache Commons Pool is not.
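To see why, consider this minimal illustration (added for clarity, not part of the original code): as long as the lazy val has not been forced before serialization, every deserialized copy of a Serializable holder re-runs the initializer, so each task ends up with its own pool:

// Illustrative only: the initializer runs once per deserialized copy,
// i.e. once per task, so the lazy val is not shared across tasks.
class Holder extends Serializable {
  lazy val pool = {
    println(s"initializing pool in ${Thread.currentThread().getName}")
    new Object // stand-in for a real connection pool
  }
}

The original (wrong) code follows: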
import java.io.PrintStream
import java.net.Socket

import org.apache.commons.pool2.{PooledObject, BasePooledObjectFactory}
import org.apache.commons.pool2.impl.{DefaultPooledObject, GenericObjectPool}
import org.apache.spark.streaming.dstream.DStream

/**
 * Publish a Spark stream to a socket.
 */
class PooledSocketStreamPublisher[T](host: String, port: Int)
  extends Serializable {

  lazy val socketPool = SocketPool(host, port)

  /**
   * Publish the stream to a socket.
   */
  def publishStream(stream: DStream[T], callback: (T) => String) = {
    stream.foreachRDD { rdd =>
      rdd.foreachPartition { partition =>
        val socket = socketPool.getSocket
        val out = new PrintStream(socket.getOutputStream)
        partition.foreach { event =>
          val text: String = callback(event)
          out.println(text)
          out.flush()
        }
        out.close()
        socketPool.returnSocket(socket)
      }
    }
  }
}
class SocketFactory(host: String, port: Int) extends BasePooledObjectFactory[Socket] {

  def create(): Socket = {
    new Socket(host, port)
  }

  def wrap(socket: Socket): PooledObject[Socket] = {
    new DefaultPooledObject[Socket](socket)
  }
}
case class SocketPool(host: String, port: Int) {

  val socketPool = new GenericObjectPool[Socket](new SocketFactory(host, port))

  def getSocket: Socket = {
    socketPool.borrowObject
  }

  def returnSocket(socket: Socket) = {
    socketPool.returnObject(socket)
  }
}
You can invoke it as follows:
val socketStreamPublisher = new PooledSocketStreamPublisher[MyEvent](host = "10.10.30.101", port = 29009)
socketStreamPublisher.publishStream(myEventStream, (e: MyEvent) => Json.stringify(Json.toJson(e)))