Apache Storm (local) not connecting to Apache Kafka (local)

Asked: 2016-05-21 17:25:39

Tags: apache-kafka apache-storm apache-zookeeper

I am working on a POC that reads messages from Kafka and processes them in real time with Storm. I have started a local ZooKeeper and Kafka, created a topic (named test) plus a producer and a consumer, and they work fine from the command prompt. Now I want to read the messages on that topic with Storm. When I run the code below, the Storm spout does not connect to Kafka/ZooKeeper. This is evident from the logs, since localhost and 2181 are never mentioned anywhere, and the process fails with:


6939 [Thread-15-eventsEmitter-executor[2 2]] INFO  o.a.s.k.PartitionManager - Read partition information from: /test/storm/partition_0  --> null

import org.apache.storm.Config;
import org.apache.storm.ILocalCluster;
import org.apache.storm.LocalCluster;
import org.apache.storm.kafka.BrokerHosts;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;

public class TestTopology {

    public static void main(String[] args) {

        BrokerHosts zkHosts = new ZkHosts("localhost:2181");
        SpoutConfig kafkaConfig = new SpoutConfig(zkHosts, "test", "/test", "storm");
        kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
        KafkaSpout kafkaSpout = new KafkaSpout(kafkaConfig);
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("eventsEmitter", kafkaSpout, 1);
        builder.setBolt("eventsProcessor", new WordCountBolt(), 1).shuffleGrouping("eventsEmitter");
        Config config = new Config();
        config.setMaxTaskParallelism(5);
        /*
         * config.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 2);
         * 
         * config.put(Config.STORM_ZOOKEEPER_PORT, 2181);
         * config.put(Config.STORM_ZOOKEEPER_SERVERS,
         * Arrays.asList("localhost"));
         */

        try {
            ILocalCluster cls = new LocalCluster();         
            cls.submitTopology("my-topology", config, builder.createTopology());
        } catch (Exception e) {
            throw new IllegalStateException("Couldn't initialize the topology",
                    e);
        }
    }

}
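For reference, the path in the failing log line is derived from the SpoutConfig arguments: the spout stores consumed offsets in ZooKeeper under zkRoot + "/" + id + "/partition_" + n, so SpoutConfig(zkHosts, "test", "/test", "storm") yields /test/storm/partition_0. A minimal sketch of that composition (plain Java, no Storm dependencies; OffsetPathDemo and offsetPath are illustrative names, not Storm API):

```java
public class OffsetPathDemo {
    // Mirrors how the KafkaSpout builds the ZooKeeper node it reads
    // offsets from: zkRoot + "/" + id + "/partition_" + partition.
    static String offsetPath(String zkRoot, String id, int partition) {
        return zkRoot + "/" + id + "/partition_" + partition;
    }

    public static void main(String[] args) {
        // With the SpoutConfig values from the question above:
        System.out.println(offsetPath("/test", "storm", 0));
        // prints /test/storm/partition_0
    }
}
```

The "--> null" in the log just means no offset has been committed at that path yet, which is normal on a first run; the real symptom here is that no connection to localhost:2181 ever appears.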

Instead, it connects the topology to the local ZooKeeper that LocalCluster itself creates (on port 2000), not to the local ZooKeeper (on 2181) that Kafka is running against:

4632 [Thread-11] INFO  o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl - Starting
4632 [Thread-11] INFO  o.a.s.s.o.a.z.ZooKeeper - Initiating client connection, connectString=localhost:2000/storm sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState@acd1da
4633 [Thread-11-SendThread(127.0.0.1:2000)] INFO  o.a.s.s.o.a.z.ClientCnxn - Opening socket connection to server 127.0.0.1/127.0.0.1:2000. Will not attempt to authenticate using SASL (unknown error)
4634 [Thread-11-SendThread(127.0.0.1:2000)] INFO  o.a.s.s.o.a.z.ClientCnxn - Socket connection established to 127.0.0.1/127.0.0.1:2000, initiating session
4634 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO  o.a.s.s.o.a.z.s.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:62287
4634 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO  o.a.s.s.o.a.z.s.ZooKeeperServer - Client attempting to establish new session at /127.0.0.1:62287
4635 [SyncThread:0] INFO  o.a.s.s.o.a.z.s.ZooKeeperServer - Established session 0x154d458c4130011 with negotiated timeout 20000 for client /127.0.0.1:62287
4635 [Thread-11-SendThread(127.0.0.1:2000)] INFO  o.a.s.s.o.a.z.ClientCnxn - Session establishment complete on server 127.0.0.1/127.0.0.1:2000, sessionid = 0x154d458c4130011, negotiated timeout = 20000
4635 [Thread-11-EventThread] INFO  o.a.s.s.o.a.c.f.s.ConnectionStateManager - State change: CONNECTED

Please let me know if you need more information.

2 Answers:

Answer 0 (score: 1)

After a nerve-racking night I found the solution. The problem was actually not in the code but in the jars. I had added the log4j jars from all three packages, i.e. ZooKeeper, Kafka and Storm, but the code expects only one. This showed up as a red warning in my Eclipse, which I had ignored earlier. Once I removed the unnecessary log4j jars, the Kafka spout started reading from the Kafka topic I had created. Thank you all for taking the time to look into this. @Matthias I think that since I point it at the ZooKeeper, it connects to whatever Kafka that ZooKeeper manages, so specifying the broker may not be necessary, at least locally. But thanks anyway.
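A quick way to spot this kind of duplicate-jar situation from code is to ask the classloader how many copies of the log4j Logger class it can see: more than one hit means conflicting log4j jars on the classpath. A minimal diagnostic sketch (Log4jDuplicateCheck is an illustrative name, not part of any of the three packages; it prints zero hits if log4j is absent entirely):

```java
import java.net.URL;
import java.util.Enumeration;

public class Log4jDuplicateCheck {
    // Counts how many jars/directories on the classpath provide
    // org.apache.log4j.Logger, printing each location found.
    public static int countCopies() throws Exception {
        Enumeration<URL> copies = Log4jDuplicateCheck.class.getClassLoader()
                .getResources("org/apache/log4j/Logger.class");
        int n = 0;
        while (copies.hasMoreElements()) {
            System.out.println("log4j Logger found in: " + copies.nextElement());
            n++;
        }
        return n;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countCopies() + " copies of log4j Logger on the classpath");
    }
}
```

If this reports more than one location, remove the extra jars (or, in a Maven/Gradle build, exclude the transitive log4j dependencies) so only a single logging binding remains.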

Answer 1 (score: 0)

You have to configure config so that it knows the Kafka broker port, for example:

Properties props = new Properties();
//default broker port = 9092
props.put("metadata.broker.list", "localhost:" + BROKER_PORT); 
props.put("request.required.acks", "1");
props.put("serializer.class", "kafka.serializer.StringEncoder");

Config config = new Config();        
config.put(KafkaBolt.KAFKA_BROKER_PROPERTIES, props);
config.setDebug(true);
config.setMaxTaskParallelism(5);