How to write an emitted tuple to a Kafka topic

Date: 2016-01-19 13:21:24

Tags: apache-kafka apache-storm kafka-producer-api

The application reads messages from one Kafka topic and, after storing them in MongoDB and performing some validation, writes them to another topic. Here I am facing a problem where the application goes into an infinite loop. My code is as follows:


    BrokerHosts zkHosts = new ZkHosts("localhost:2181");
    String zkRoot = "/brokers/topics";
    String clientRequestID = "reqtest";
    String clientPendingID = "pendtest";

    SpoutConfig kafkaRequestConfig = new SpoutConfig(zkHosts, "reqtest", zkRoot, clientRequestID);
    SpoutConfig kafkaPendingConfig = new SpoutConfig(zkHosts, "pendtest", zkRoot, clientPendingID);
    kafkaRequestConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
    kafkaPendingConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

    KafkaSpout kafkaRequestSpout = new KafkaSpout(kafkaRequestConfig);
    KafkaSpout kafkaPendingSpout = new KafkaSpout(kafkaPendingConfig);

    MongoBolt mongoBolt = new MongoBolt();
    DeviceFilterBolt deviceFilterBolt = new DeviceFilterBolt();
    KafkaRequestBolt kafkaReqBolt = new KafkaRequestBolt();
    abc1DeviceBolt abc1DevBolt = new abc1DeviceBolt();

    DefaultTopicSelector defTopicSelector = new DefaultTopicSelector(xyzKafkaTopic.RESPONSE.name());
    KafkaBolt kafkaRespBolt = new KafkaBolt()
            .withTopicSelector(defTopicSelector)
            .withTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper());

    TopologyBuilder topoBuilder = new TopologyBuilder();
    topoBuilder.setSpout(xyzComponent.KAFKA_REQUEST_SPOUT.name(), kafkaRequestSpout);
    topoBuilder.setSpout(xyzComponent.KAFKA_PENDING_SPOUT.name(), kafkaPendingSpout);
    topoBuilder.setBolt(xyzComponent.KAFKA_PENDING_BOLT.name(), deviceFilterBolt, 1)
            .shuffleGrouping(xyzComponent.KAFKA_PENDING_SPOUT.name());
    topoBuilder.setBolt(xyzComponent.abc1_DEVICE_BOLT.name(), abc1DevBolt, 1)
            .shuffleGrouping(xyzComponent.KAFKA_PENDING_BOLT.name(), xyzDevice.abc1.name());
    topoBuilder.setBolt(xyzComponent.MONGODB_BOLT.name(), mongoBolt, 1)
            .shuffleGrouping(xyzComponent.abc1_DEVICE_BOLT.name(), xyzStreamID.KAFKARESP.name());
    topoBuilder.setBolt(xyzComponent.KAFKA_RESPONSE_BOLT.name(), kafkaRespBolt, 1)
            .shuffleGrouping(xyzComponent.abc1_DEVICE_BOLT.name(), xyzStreamID.KAFKARESP.name());

    Config config = new Config();
    config.setDebug(true);
    config.setNumWorkers(1);

    Properties props = new Properties();
    props.put("metadata.broker.list", "localhost:9092");
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("request.required.acks", "1");
    config.put(KafkaBolt.KAFKA_BROKER_PROPERTIES, props);

    LocalCluster cluster = new LocalCluster();
    try {
        cluster.submitTopology("demo", config, topoBuilder.createTopology());
    }

In the above code, the KAFKA_RESPONSE_BOLT is not writing the data to the topic. The abc1_DEVICE_BOLT feeds this KAFKA_RESPONSE_BOLT by emitting data like this:

KAFKA_RESPONSE_BOLT

1 Answer:

Answer 0 (score: 1)

I was stuck on this same problem for a long time, and the answer is so simple... you won't believe it.

As I understand it, the KafkaBolt implementation requires that the incoming tuple carry its payload in a field named "message", regardless of whether the upstream component is a bolt or a spout (with the default FieldNameBasedTupleToKafkaMapper, the key field is named "key" and the message field is named "message"). So you will have to make some changes to your code accordingly. I haven't looked at your code closely, but I believe this will help!
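To make the field-name requirement concrete, here is a small self-contained sketch (plain Java, not the actual Storm classes; the class and method names are invented for illustration) of how a FieldNameBasedTupleToKafkaMapper-style lookup behaves when the emitting bolt does, or does not, declare a field called "message":

```java
import java.util.Arrays;
import java.util.List;

// Illustrative stand-in for how the mapper resolves the outgoing Kafka
// record from a Storm tuple: it looks up fields by the names "key" and
// "message" (the mapper's defaults). If the emitting bolt declared its
// output fields under different names, the lookup fails.
public class FieldNameMapperSketch {

    // A toy tuple: declared field names plus the emitted values.
    static String getValueByField(List<String> fields, List<String> values, String name) {
        int i = fields.indexOf(name);
        if (i < 0) {
            throw new IllegalArgumentException("tuple has no field named '" + name + "'");
        }
        return values.get(i);
    }

    public static void main(String[] args) {
        List<String> values = Arrays.asList("device-42", "{\"status\":\"ok\"}");

        // Bolt declared new Fields("key", "message") -> the mapper finds both.
        List<String> goodFields = Arrays.asList("key", "message");
        System.out.println(getValueByField(goodFields, values, "message"));

        // Bolt declared e.g. new Fields("deviceId", "payload") -> the
        // lookup fails, which is why KafkaBolt never writes to the topic.
        List<String> badFields = Arrays.asList("deviceId", "payload");
        try {
            getValueByField(badFields, values, "message");
        } catch (IllegalArgumentException e) {
            System.out.println("lookup failed: " + e.getMessage());
        }
    }
}
```

In other words, the fix is in the emitting bolt: declare the output fields as "key" and "message" so the mapper can find them by name.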

The specific reason is explained at https://mail-archives.apache.org/mod_mbox/incubator-storm-user/201409.mbox/%3C6AF1CAC6-60EA-49D9-8333-0343777B48A7@andrashatvani.com%3E