I'm new to Kafka, ZooKeeper, and Flume.
I've been playing with the three of them and have put together a cluster to test how they work.
My question is: what type of interceptor can I use so that a Flume Kafka source reads data from topic 1 and a Kafka sink stores it in topic 2 on the same Kafka cluster, without the data being rewritten back into topic 1? Is that possible?
This is my Flume agent configuration:
SecondAgent.sources = fuenteKafka
SecondAgent.channels = canal2
SecondAgent.sinks = ContinenteNegro
################## config kafka source ############
SecondAgent.sources.fuenteKafka.channels = canal2
SecondAgent.sources.fuenteKafka.type = org.apache.flume.source.kafka.KafkaSource
SecondAgent.sources.fuenteKafka.zookeeperConnect = dragons:2181,dragons:2180,dragons:2182
SecondAgent.sources.fuenteKafka.topic = zoldick
SecondAgent.sources.fuenteKafka.groupId = flume
SecondAgent.sources.fuenteKafka.batchSize = 1000
SecondAgent.sources.fuenteKafka.kafka.consumer.timeout.ms = 100
########### config channel#############
SecondAgent.channels.canal2.type = memory
SecondAgent.channels.canal2.capacity = 1000
SecondAgent.channels.canal2.transactionCapacity = 1000
####### config ContinenteNegro #########
SecondAgent.sinks.ContinenteNegro.channel = canal2
SecondAgent.sinks.ContinenteNegro.type = org.apache.flume.sink.kafka.KafkaSink
SecondAgent.sinks.ContinenteNegro.topic = freecss
SecondAgent.sinks.ContinenteNegro.brokerList = dragons:9092,dragons:9093,dragons:9094
SecondAgent.sinks.ContinenteNegro.batchSize = 100
SecondAgent.sinks.ContinenteNegro.requiredAcks = -1
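
From what I've read, the Kafka source stamps every event with a topic header holding the topic it was read from, and the Kafka sink gives that header priority over its own configured topic, which would explain why everything loops back into zoldick. Below is a minimal sketch of what I'm considering, assuming Flume's built-in static interceptor can overwrite that header (the interceptor name i1 is just my placeholder):

####### config interceptor (sketch) #########
SecondAgent.sources.fuenteKafka.interceptors = i1
SecondAgent.sources.fuenteKafka.interceptors.i1.type = static
SecondAgent.sources.fuenteKafka.interceptors.i1.key = topic
# overwrite the topic header set by the Kafka source instead of keeping it
SecondAgent.sources.fuenteKafka.interceptors.i1.preserveExisting = false
# force events toward the sink's destination topic
SecondAgent.sources.fuenteKafka.interceptors.i1.value = freecss

Would something like this keep topic zoldick untouched, or do I need a different type of interceptor here?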