Backpressure to ReactiveKafka when sending to sharded actors

Date: 2017-04-10 13:21:54

Tags: akka apache-kafka actor reactive backpressure

I have written an Akka application that reads its input from Kafka, processes the data with sharded actors, and writes the output back to Kafka.

But in some cases the shard region cannot keep up with the load, and I get:

    You should implement flow control to avoid flooding the remote connection.

How can I implement backpressure in this chain/flow?

Kafka Consumer -> Sharded Actor -> Kafka Producer

Some snippets from the code:

// Producer side: kafka.publish(...) returns a Reactive Streams Subscriber
// that writes to Kafka (pp holds the producer properties).
ReactiveKafka kafka = new ReactiveKafka();

Subscriber subscriber = kafka.publish(pp, system);

// Writer actor: buffers up to 10000 elements and drops the oldest when full.
ActorRef kafkaWriterActor = (ActorRef) Source.actorRef(10000, OverflowStrategy.dropHead())
                .map(ix -> KeyValueProducerMessage.apply(Integer.toString(ix.hashCode()), ix))
                .to(Sink.fromSubscriber(subscriber))
                .run(materializer);

// Consumer side: kafka.consume(...) returns a Reactive Streams Publisher of consumer records.
ConsumerProperties cp = new PropertiesBuilder.Consumer(brokerList, intopic, consumergroup, new ByteArrayDeserializer(), new NgMsgDecoder())
                        .build().consumerTimeoutMs(5000).commitInterval(Duration.create(60, TimeUnit.SECONDS)).readFromEndOfStream();

Publisher<ConsumerRecord<byte[], StreamEvent>> publisher = kafka.consume(cp, system);

// Shard region for the processing actors.
ActorRef streamActor = ClusterSharding.get(system).start("StreamActor",
                Props.create(StreamActor.class, synctime), ClusterShardingSettings.create(system), messageExtractor);

shardRegionTypenames.add("StreamActor");

// Every consumed record is sent to the shard region with a fire-and-forget tell.
Source.fromPublisher(publisher)
                .runWith(Sink.foreach(msg -> {
                    streamActor.tell(msg.value(), ActorRef.noSender());
                }), materializer);

1 Answer:

Answer 0 (score: 1):

Perhaps you can consider parallelizing the topic into partitions (if applicable) and create a consumer with per-partition backpressure by adapting the ConsumerWithPerPartitionBackpressure in this example, integrating with your actors via mapAsync and ask.
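
To make that concrete, here is a minimal sketch built on the snippets in the question rather than the linked example; the reply protocol, parallelism and timeout values are assumptions, not the asker's code. Replacing the fire-and-forget tell inside Sink.foreach with mapAsync + ask bounds the number of messages in flight towards the shard region, and that bound propagates as backpressure all the way back to the Kafka consumer. It assumes StreamActor replies to each event with the processed value that should be published.

import java.util.concurrent.TimeUnit;
import akka.pattern.PatternsCS;
import akka.util.Timeout;

// Bound the number of outstanding asks; tune to what the shard region can absorb.
final int parallelism = 16;
final Timeout askTimeout = new Timeout(5, TimeUnit.SECONDS);

Source.fromPublisher(publisher)
        // mapAsync only pulls the next record from Kafka once fewer than
        // `parallelism` asks are in flight, so the consumer is backpressured.
        .mapAsync(parallelism, msg -> PatternsCS.ask(streamActor, msg.value(), askTimeout))
        // Assumes the actor's reply is the value to publish back to Kafka.
        .map(reply -> KeyValueProducerMessage.apply(Integer.toString(reply.hashCode()), reply))
        .to(Sink.fromSubscriber(subscriber))
        .run(materializer);

Note that this single stream would replace both the kafkaWriterActor source and the Sink.foreach stage from the question, since the Subscriber returned by kafka.publish is meant to have a single upstream. For per-partition backpressure you would additionally split the consumer by partition and run one such pipeline per partition, as in the linked ConsumerWithPerPartitionBackpressure example.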