How do I get a batched list of ConsumerRecord[String, String] from Kafka with Alpakka and commit the offsets?

Date: 2018-07-24 08:04:17

Tags: apache-kafka akka-stream alpakka

I need to consume records from Kafka in batches, as a Seq[ConsumerRecord[String, String]]. I put together this sample code, but it does not work for me:

Consumer.committableSource(consumerSettings, Subscriptions.topics("topic-1"))
  .groupedWithin(10, 10.seconds)
  .mapAsync(1) { group =>
    val msgs = group.toList.map(_.record.value())
    saveToDB(msgs)
  }
  .map(group => group.foldLeft(CommittableOffsetBatch.empty) { (batch, elem) =>
    batch.updated(elem.committableOffset)
  })
  .mapAsync(3)(_.commitScaladsl()) 
  .runWith(Sink.ignore)

and I declared the saveToDB(msgs) function as:

def saveToDB(msgs: List[String]): Future[Done] = ???
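
Even with a trivial placeholder body (shown below just to satisfy the Future[Done] return type; the real database write is not part of the question), the result is the same:

import akka.Done
import scala.concurrent.Future

// Hypothetical stub: completes immediately instead of writing to a database.
def saveToDB(msgs: List[String]): Future[Done] =
  Future.successful(Done)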

The stream fails to compile with this error:

Error:(48, 25) value foldLeft is not a member of akka.Done
.map(group => group.foldLeft(CommittableOffsetBatch.empty) { (batch, elem) =>

How should saveToDB be written (or the stream restructured) so that this code compiles and runs correctly?
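
I suspect the fix is to carry the group past the save step so its offsets are still available to fold into a CommittableOffsetBatch, but I am not sure this is the idiomatic way. Below is a self-contained sketch of that idea; the bootstrap servers, group id, and the body of saveToDB are placeholders, not taken from my real application:

import akka.Done
import akka.actor.ActorSystem
import akka.kafka.ConsumerMessage.CommittableOffsetBatch
import akka.kafka.scaladsl.Consumer
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.serialization.StringDeserializer

import scala.concurrent.Future
import scala.concurrent.duration._

object BatchConsumer extends App {

  implicit val system: ActorSystem = ActorSystem("batch-consumer")
  implicit val materializer: ActorMaterializer = ActorMaterializer()
  import system.dispatcher

  // Placeholder settings; the bootstrap servers and group id are assumptions.
  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092")
      .withGroupId("group-1")

  // Placeholder persistence function; the real DB write would go here.
  def saveToDB(msgs: List[String]): Future[Done] =
    Future.successful(Done)

  Consumer.committableSource(consumerSettings, Subscriptions.topics("topic-1"))
    .groupedWithin(10, 10.seconds)
    .mapAsync(1) { group =>
      val msgs = group.toList.map(_.record.value())
      // Map the Future back to the group so the offsets survive the save step.
      saveToDB(msgs).map(_ => group)
    }
    .map(group =>
      group.foldLeft(CommittableOffsetBatch.empty) { (batch, elem) =>
        batch.updated(elem.committableOffset)
      })
    .mapAsync(3)(_.commitScaladsl())
    .runWith(Sink.ignore)
}

The only real change to the stream is saveToDB(msgs).map(_ => group): the Future returned by the save step is mapped back to the original group, so the next stage receives the CommittableMessages instead of akka.Done. (Newer Alpakka Kafka releases also provide Committer.sink for committing, but this sketch keeps the commitScaladsl API used above.)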

0 Answers:

There are no answers yet.