Submitting a Storm topology on multiple nodes

Date: 2015-06-19 09:34:47

Tags: java bigdata apache-kafka apache-storm

I am running the Hortonworks truck-events demo topology. It works perfectly on a single node (localhost), but now I want to run it on multiple nodes.

The event producer works, and I can read from the topic without any problem.

But when I run the Storm topology, it crashes...

My only modification was in event_topology.properties:

kafka.zookeeper.host.port=10.0.0.24:2181
#Kafka topic to consume.
kafka.topic=vehicleevent
#Location in ZK for the Kafka spout to store state.
kafka.zkRoot=/vehicle_event_spout
#Kafka Spout Executors.
spout.thread.count=1

#hdfs bolt settings
hdfs.path=/vehicle-events
hdfs.url=hdfs://10.0.0.24:8020
hdfs.file.prefix=vehicleEvents
#data will be moved from hdfs to the hive partition
#on the first write after the 5th minute.
hdfs.file.rotation.time.minutes=5

#hbase bolt settings
hbase.persist.all.events=true

#hive settings
hive.metastore.url=thrift://10.0.0.23:9083
hive.staging.table.name=vehicle_events_text_partition
hive.database.name=default
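
These settings are presumably loaded through a standard `java.util.Properties` reader before being handed to the spout and bolt builders. A minimal sketch of that loading step (the class name `TopologyConfigSketch` is hypothetical; the keys and values are taken from the file above, inlined here so the snippet is self-contained):

```java
import java.io.StringReader;
import java.util.Properties;

public class TopologyConfigSketch {
    public static void main(String[] args) throws Exception {
        // Inline a fragment of event_topology.properties for illustration;
        // the real topology presumably loads the file from the classpath.
        String fragment =
                "kafka.zookeeper.host.port=10.0.0.24:2181\n" +
                "kafka.topic=vehicleevent\n" +
                "kafka.zkRoot=/vehicle_event_spout\n" +
                "spout.thread.count=1\n";

        Properties props = new Properties();
        props.load(new StringReader(fragment));

        // Split host:port the way a Kafka spout setup typically needs them.
        String[] hostPort = props.getProperty("kafka.zookeeper.host.port").split(":");
        System.out.println(hostPort[0]);                      // ZooKeeper host
        System.out.println(hostPort[1]);                      // ZooKeeper port
        System.out.println(props.getProperty("kafka.topic")); // topic to consume
    }
}
```

If the topology crashes only on the multi-node setup, it is worth double-checking that each of these hosts and ports is reachable from the machine that actually runs the workers, not just from the machine where the jar is submitted.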

After several attempts, I tested a modification in topology.java:

    final Config conf = new Config();
    conf.setDebug(true);
    conf.put(Config.NIMBUS_HOST, "10.0.0.23");
    conf.put(Config.NIMBUS_THRIFT_PORT, 6627);
    conf.put(Config.STORM_ZOOKEEPER_PORT, 2181);
    conf.put(Config.STORM_ZOOKEEPER_SERVERS, Arrays.asList("10.0.0.24", "10.0.0.23"));

    // StormSubmitter.submitTopology(TOPOLOGY_NAME, conf, builder.createTopology());

    final LocalCluster cluster = new LocalCluster();
    cluster.submitTopology(TOPOLOGY_NAME, conf, builder.createTopology());
    Utils.waitForSeconds(10);
    cluster.killTopology(TOPOLOGY_NAME);
    cluster.shutdown();
}
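
One thing to note about the code above: `LocalCluster` runs the whole topology inside the submitting JVM and never contacts Nimbus, so the `NIMBUS_HOST`, `NIMBUS_THRIFT_PORT`, and ZooKeeper settings put into `conf` have no effect there. Deploying to the real multi-node cluster is what the commented-out `StormSubmitter` line does. A sketch of that path, assuming the Storm 0.9.x (`backtype.storm`) API used by the Hortonworks demo and the existing `TOPOLOGY_NAME` and `builder` fields:

```java
// Sketch only: assumes TOPOLOGY_NAME and builder are defined
// as in the original topology class.
import backtype.storm.Config;
import backtype.storm.StormSubmitter;

final Config conf = new Config();
conf.setDebug(true);
// When the jar is submitted with the `storm jar` command, the Nimbus
// host/port and ZooKeeper servers are taken from the cluster's
// storm.yaml, so they do not need to be hard-coded here.
conf.setNumWorkers(2); // hypothetical worker count

// Deploys the packaged jar to the cluster via Nimbus instead of
// running an in-process LocalCluster.
StormSubmitter.submitTopology(TOPOLOGY_NAME, conf, builder.createTopology());
```

This variant is then launched from a machine with a Storm client configured for the cluster, e.g. `storm jar your-topology.jar your.package.Topology` (jar and class names are placeholders).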

Any suggestions would be much appreciated :)

0 Answers