How to "Chunk and Re-assemble" large messages sent via Reactive Kafka using Akka-Stream

Date: 2016-09-30 23:54:39

Tags: scala apache-kafka akka-stream reactive-kafka

When sending a large file with Kafka, is it possible to distribute it across partitions and then re-assemble it using Akka-Stream, as described in this presentation:

http://www.slideshare.net/JiangjieQin/handle-large-messages-in-apache-kafka-58692297

1 Answer:

Answer 0 (score: 2)

The "chunking" side, i.e. the producer, is easy to write with something like reactive kafka:
case class LargeMessage(bytes : Seq[Byte], topic : String)

def messageToKafka(message : LargeMessage, maxMessageSize : Int) =
  Source.fromIterator(() => message.bytes.iterator)
        .via(Flow[Byte].grouped(maxMessageSize))                 // split into chunks of at most maxMessageSize bytes
        .via(Flow[Seq[Byte]].map(seq => new ProducerRecord(message.topic, seq.toArray)))  // one record per chunk, as a byte array
        .runWith(Producer.plainSink(producerSettings))
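The invariant the pipeline above relies on is that chunking with grouped() and flattening the chunks back together reproduces the original bytes. This can be checked without Kafka at all, using plain Scala collections; the ChunkRoundTrip object below is purely illustrative and not part of reactive kafka:

```scala
// Illustrative sketch (no Kafka): grouped() followed by flatten
// restores the original byte sequence, which is what the producer
// and consumer stages above and below depend on.
object ChunkRoundTrip {
  def chunk(bytes: Seq[Byte], maxMessageSize: Int): Seq[Seq[Byte]] =
    bytes.grouped(maxMessageSize).toSeq

  def reassemble(chunks: Seq[Seq[Byte]]): Seq[Byte] =
    chunks.flatten

  def main(args: Array[String]): Unit = {
    val original: Seq[Byte] = (1 to 10).map(_.toByte)
    val chunks = chunk(original, 3)
    println(chunks.length)                  // 4 (chunks of 3+3+3+1 bytes)
    println(reassemble(chunks) == original) // true
  }
}
```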

The "re-assembling" side, i.e. the consumer, can be implemented in a manner similar to the documentation:

   val messageFut : Future[LargeMessage] =
     for {
       bytes <- Consumer.plainSource(consumerSettings, Subscriptions.topics(topic))
                        .mapConcat(_.value().toList)  // flatten each chunk back into individual bytes
                        .runWith(Sink.seq[Byte])      // note: the stream must terminate for the Future to complete
     } yield LargeMessage(bytes, topic)
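One caveat: the consumer above assumes all chunks belong to a single message and arrive in order. If several large messages can interleave on the topic, as in the linked presentation, each chunk would need a small header (a message id, a sequence number, and the total chunk count) so the consumer can group and order chunks before re-assembly. The sketch below is hypothetical; the Chunk case class and reassemble helper are illustrative names, not part of reactive kafka:

```scala
// Hypothetical sketch: tag each chunk with a header so interleaved
// messages can be separated, ordered, and re-assembled.
object ChunkHeader {
  case class Chunk(messageId: String, seqNr: Int, total: Int, payload: Seq[Byte])

  // Group chunks by message id and re-assemble only the complete messages.
  def reassemble(chunks: Seq[Chunk]): Map[String, Seq[Byte]] =
    chunks.groupBy(_.messageId).collect {
      case (id, parts) if parts.map(_.seqNr).toSet.size == parts.head.total =>
        id -> parts.sortBy(_.seqNr).flatMap(_.payload)
    }
}
```

Incomplete messages (missing chunks) are simply withheld until the rest of their chunks arrive, which mirrors the buffering strategy described in the presentation.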