DRPC server error in Storm

Date: 2014-12-12 03:58:53

Tags: hadoop apache-storm trident

I am trying to execute the following code and am getting the error below. I am not sure whether I am missing something here. Also, where can I see the output?

Error:

java.lang.RuntimeException: No DRPC servers configured for topology
    at backtype.storm.drpc.DRPCSpout.open(DRPCSpout.java:79)
    at storm.trident.spout.RichSpoutBatchTriggerer.open(RichSpoutBatchTriggerer.java:58)
    at backtype.storm.daemon.executor$fn__5802$fn__5817.invoke(executor.clj:519)
    at backtype.storm.util$async_loop$fn__442.invoke(util.clj:434)
    at clojure.lang.AFn.run(AFn.java:24)
    at java.lang.Thread.run(Thread.java:744)

Code:
----
package com.**.trident.storm;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import storm.kafka.*;
import storm.kafka.trident.*;
import storm.trident.*;
import storm.trident.operation.builtin.*;
import storm.trident.testing.MemoryMapState;

import backtype.storm.*;
import backtype.storm.generated.StormTopology;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.tuple.Fields;
import backtype.storm.utils.DRPCClient;

// SplitMac and XScheme are assumed to be the poster's own classes in the same package.

public class EventTridentDrpcTopology
{
private static final String KAFKA_SPOUT_ID = "kafkaSpout";  

private static final Logger log = LoggerFactory.getLogger(EventTridentDrpcTopology.class);

public static StormTopology buildTopology(OpaqueTridentKafkaSpout spout) throws Exception
{
    TridentTopology tridentTopology = new TridentTopology();
    // Persistent state: count incoming events per MAC address from the Kafka spout
    TridentState ts = tridentTopology.newStream("event_spout",spout)
    .name(KAFKA_SPOUT_ID)
    .each(new Fields("mac_address"), new SplitMac(), new Fields("mac"))
    .groupBy(new Fields("mac"))
    .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("maccount"))
    .parallelismHint(4)
    ;

    // DRPC stream "mac_count": split the request args into MACs, look up each
    // count from the state, drop nulls, and sum the results
    tridentTopology
    .newDRPCStream("mac_count")
    .each(new Fields("args"), new SplitMac(), new Fields("mac"))
    .stateQuery(ts,new Fields("mac"),new MapGet(), new Fields("maccount"))
    .each(new Fields("maccount"), new FilterNull())
    .aggregate(new Fields("maccount"), new Sum(), new Fields("sum"))
     ;

return tridentTopology.build();

}

public static void main(String[] str) throws Exception
{
    Config conf = new Config();
    BrokerHosts hosts = new ZkHosts("xxxx:2181,xxxx:2181,xxxx:2181");
    String topic = "event";
    //String zkRoot = topologyConfig.getProperty("kafka.zkRoot");
    String consumerGroupId = "StormSpout";

    DRPCClient drpc = new DRPCClient("xxxx",3772);


    TridentKafkaConfig tridentKafkaConfig = new TridentKafkaConfig(hosts, topic, consumerGroupId);
    tridentKafkaConfig.scheme = new SchemeAsMultiScheme(new XScheme()); 
    OpaqueTridentKafkaSpout opaqueTridentKafkaSpout = new OpaqueTridentKafkaSpout(tridentKafkaConfig);


    StormSubmitter.submitTopology("event_trident", conf, buildTopology(opaqueTridentKafkaSpout));

}

}
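As for where to see the output: once DRPC servers are configured and the topology is deployed, the result of the "mac_count" stream is returned to a DRPC client call rather than printed by the topology. A minimal sketch of such a client (the host, port, and MAC address argument are placeholders, and the class name is hypothetical):

import backtype.storm.utils.DRPCClient;

public class MacCountQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder DRPC host; 3772 is the default DRPC port.
        DRPCClient drpc = new DRPCClient("xxxx", 3772);
        // Invokes the "mac_count" DRPC function defined in the topology and
        // blocks until the summed count comes back as a string.
        String result = drpc.execute("mac_count", "00:11:22:33:44:55");
        System.out.println("mac_count result: " + result);
    }
}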

1 answer:

Answer 0 (score: 1)

You have to configure the locations of the DRPC servers and launch them. See the "Remote mode DRPC" section at http://storm.apache.org/releases/0.10.0/Distributed-RPC.html

Launch DRPC server(s)
Configure the locations of the DRPC servers
Submit DRPC topologies to the Storm cluster

Launching a DRPC server can be done with the storm script, just like launching Nimbus or the UI:

bin/storm drpc

Next, you need to configure your Storm cluster to know the locations of the DRPC server(s). This is how DRPCSpout knows where to read function invocations from. This can be done either through the storm.yaml file or through the topology configuration. Configuring it via storm.yaml looks like this:

drpc.servers:
  - "drpc1.foo.com"
  - "drpc2.foo.com"
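The same setting can also be supplied through the topology configuration when submitting. Below is a minimal sketch of how the question's main() could pass it (the host names are the same placeholders as above; Config.DRPC_SERVERS and Config.DRPC_PORT correspond to the drpc.servers and drpc.port keys):

// In main(), before submitting the topology; requires java.util.Arrays.
Config conf = new Config();
// Equivalent to drpc.servers in storm.yaml: tells DRPCSpout where the DRPC servers run.
conf.put(Config.DRPC_SERVERS, Arrays.asList("drpc1.foo.com", "drpc2.foo.com"));
// 3772 is the default DRPC request port; adjust if the servers listen elsewhere.
conf.put(Config.DRPC_PORT, 3772);
StormSubmitter.submitTopology("event_trident", conf, buildTopology(opaqueTridentKafkaSpout));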