ERROR backtype.storm.util - Async loop died! java.lang.RuntimeException: java.lang.RuntimeException

Asked: 2015-06-01 07:26:16

Tags: java runtime-error apache-kafka apache-storm

I am running a simple word-count topology using the Kafka-Storm integration. When I use a text file as the spout source, I get output. But when I use the KafkaSpout, I get the following error.

Error:

ERROR backtype.storm.util - Async loop died! java.lang.RuntimeException: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/topics/RandomQuery/partitions
7173 [Thread-4-SendThread(localhost:2006)] INFO  org.apache.storm.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2006. Will not attempt to authenticate using SASL (unknown error)
7174 [Thread-4-SendThread(localhost:2006)] INFO  org.apache.storm.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:2006, initiating session
7175 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2006] WARN  org.apache.storm.zookeeper.server.NIOServerCnxn - caught end of stream exception
org.apache.storm.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x14dadefc40c0012, likely client has closed socket
at org.apache.storm.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) ~[storm-core-0.9.4.jar:0.9.4]
at org.apache.storm.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [storm-core-0.9.4.jar:0.9.4]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
7175 [Thread-17-QueryCounter-EventThread] INFO  org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
7175 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2006] INFO  org.apache.storm.zookeeper.server.NIOServerCnxn - Closed socket connection for client /127.0.0.1:51264 which had sessionid 0x14dadefc40c0012
7176 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2006] INFO  org.apache.storm.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:51267
7176 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2006] INFO  org.apache.storm.zookeeper.server.ZooKeeperServer - Client attempting to establish new session at /127.0.0.1:51267
7190 [Thread-17-QueryCounter] ERROR backtype.storm.util - Async loop died!
java.lang.RuntimeException: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/topics/RandomQuery/partitions
at storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:81) ~[storm-kafka-0.9.4.jar:0.9.4]
at storm.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:42) ~[storm-kafka-0.9.4.jar:0.9.4]
at storm.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:57) ~[storm-kafka-0.9.4.jar:0.9.4]
at storm.kafka.KafkaSpout.open(KafkaSpout.java:87) ~[storm-kafka-0.9.4.jar:0.9.4]
at backtype.storm.daemon.executor$fn__3371$fn__3386.invoke(executor.clj:522) ~[storm-core-0.9.4.jar:0.9.4]
at backtype.storm.util$async_loop$fn__460.invoke(util.clj:461) ~[storm-core-0.9.4.jar:0.9.4]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
Caused by: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/topics/RandomQuery/partitions
at storm.kafka.DynamicBrokersReader.getNumPartitions(DynamicBrokersReader.java:94) ~[storm-kafka-0.9.4.jar:0.9.4]
at storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:65) ~[storm-kafka-0.9.4.jar:0.9.4]
... 7 common frames omitted
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/topics/RandomQuery/partitions
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111) ~[zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) ~[zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1590) ~[zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:214) ~[curator-framework-2.5.0.jar:na]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:203) ~[curator-framework-2.5.0.jar:na]
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107) ~[curator-client-2.5.0.jar:na]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:199) ~[curator-framework-2.5.0.jar:na]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:191) ~[curator-framework-2.5.0.jar:na]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:38) ~[curator-framework-2.5.0.jar:na]
at storm.kafka.DynamicBrokersReader.getNumPartitions(DynamicBrokersReader.java:91) ~[storm-kafka-0.9.4.jar:0.9.4]
... 8 common frames omitted
7191 [Thread-17-QueryCounter] ERROR backtype.storm.daemon.executor - 
java.lang.RuntimeException: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/topics/RandomQuery/partitions
at storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:81) ~[storm-kafka-0.9.4.jar:0.9.4]
at storm.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:42) ~[storm-kafka-0.9.4.jar:0.9.4]
at storm.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:57) ~[storm-kafka-0.9.4.jar:0.9.4]
at storm.kafka.KafkaSpout.open(KafkaSpout.java:87) ~[storm-kafka-0.9.4.jar:0.9.4]
at backtype.storm.daemon.executor$fn__3371$fn__3386.invoke(executor.clj:522) ~[storm-core-0.9.4.jar:0.9.4]
at backtype.storm.util$async_loop$fn__460.invoke(util.clj:461) ~[storm-core-0.9.4.jar:0.9.4]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
Caused by: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/topics/RandomQuery/partitions
at storm.kafka.DynamicBrokersReader.getNumPartitions(DynamicBrokersReader.java:94) ~[storm-kafka-0.9.4.jar:0.9.4]
at storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:65) ~[storm-kafka-0.9.4.jar:0.9.4]
... 7 common frames omitted
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/topics/RandomQuery/partitions
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111) ~[zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) ~[zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1590) ~[zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:214) ~[curator-framework-2.5.0.jar:na]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:203) ~[curator-framework-2.5.0.jar:na]
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107) ~[curator-client-2.5.0.jar:na]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:199) ~[curator-framework-2.5.0.jar:na]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:191) ~[curator-framework-2.5.0.jar:na]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:38) ~[curator-framework-2.5.0.jar:na]
at storm.kafka.DynamicBrokersReader.getNumPartitions(DynamicBrokersReader.java:91) ~[storm-kafka-0.9.4.jar:0.9.4]
... 8 common frames omitted
 7257 [SyncThread:0] INFO  org.apache.storm.zookeeper.server.ZooKeeperServer - Established session 0x14dadefc40c0013 with negotiated timeout 20000 for client /127.0.0.1:51265
7257 [Thread-17-QueryCounter-EventThread] INFO  org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
7268 [SyncThread:0] INFO  org.apache.storm.zookeeper.server.ZooKeeperServer - Established session 0x14dadefc40c0014 with negotiated timeout 20000 for client /127.0.0.1:51267
7268 [Thread-4-SendThread(localhost:2006)] INFO  org.apache.storm.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:2006, sessionid = 0x14dadefc40c0014, negotiated timeout = 20000
7269 [Thread-4-EventThread] INFO  org.apache.storm.curator.framework.state.ConnectionStateManager - State change: CONNECTED
    7306 [Thread-4] INFO  backtype.storm.daemon.worker - Reading Assignments.
7400 [Thread-17-QueryCounter] ERROR backtype.storm.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:325) [storm-core-0.9.4.jar:0.9.4]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.5.1.jar:na]
at backtype.storm.daemon.worker$fn__4693$fn__4694.invoke(worker.clj:491) [storm-core-0.9.4.jar:0.9.4]
at backtype.storm.daemon.executor$mk_executor_data$fn__3272$fn__3273.invoke(executor.clj:240) [storm-core-0.9.4.jar:0.9.4]
at backtype.storm.util$async_loop$fn__460.invoke(util.clj:473) [storm-core-0.9.4.jar:0.9.4]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
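The root cause is the innermost NoNodeException: ZooKeeper has no znode at /brokers/topics/RandomQuery/partitions, the path storm-kafka's DynamicBrokersReader queries to discover partitions. That usually means either the topic RandomQuery was never created on the Kafka broker, or the spout is pointed at the wrong ZooKeeper ensemble (the spout config below uses localhost:2181, while LocalCluster also starts its own in-process ZooKeeper on another port). A minimal sketch of the path derivation, with BrokerPathSketch as a hypothetical helper name:

```java
// Hypothetical helper mirroring the znode path that storm-kafka's
// DynamicBrokersReader looks up for a topic's partition list.
// Kafka 0.8-era brokers register topics under /brokers/topics/<topic>.
public class BrokerPathSketch {
    public static String partitionsPath(String topic) {
        return "/brokers/topics/" + topic + "/partitions";
    }

    public static void main(String[] args) {
        // Reproduces the path from the stack trace above.
        System.out.println(partitionsPath("RandomQuery"));
    }
}
```

If that znode is missing in the ZooKeeper the spout connects to, creating the topic on the broker (or fixing the ZooKeeper host/port) is the likely fix.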

My topology:

public class TopologyQueryCounterMain {


static final Logger logger = Logger.getLogger(TopologyQueryCounterMain.class);


private static final String SPOUT_ID = "QueryCounter";


public static void main(String[] args) throws AlreadyAliveException, InvalidTopologyException {

    int numSpoutExecutors = 1;
    logger.debug("This is SpoutConfig");
    KafkaSpout kspout = QueryCounter();
    TopologyBuilder builder = new TopologyBuilder();
    logger.debug("This is Set Spout");
    builder.setSpout(SPOUT_ID, kspout, numSpoutExecutors);
    logger.debug("This is Set bolt");
    builder.setBolt("word-normalizer", new WordNormalizer())
        .shuffleGrouping(SPOUT_ID);
    builder.setBolt("word-counter", new WordCounter(),1)
        .fieldsGrouping("word-normalizer", new Fields("sentence"));


    Config conf = new Config();
    LocalCluster cluster = new LocalCluster();
    logger.debug("This is Submit cluster");
    conf.put(Config.NIMBUS_HOST, "192.168.1.229");
    conf.put(Config.NIMBUS_THRIFT_PORT, 6627);
     System.setProperty("storm.jar", "/home/ubuntu/workspace/QueryCounter/target/QueryCounter-0.0.1-SNAPSHOT.jar");
    conf.setNumWorkers(20);
    conf.setMaxSpoutPending(5000);

    if (args != null && args.length > 0) {
        StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
    }

    else
    {   
        cluster.submitTopology("QueryCounter", conf, builder.createTopology());
        Utils.sleep(10000);
        cluster.killTopology("QueryCounter");
        logger.debug("This is ShutDown cluster");
        //cluster.shutdown();
    }
}


private static KafkaSpout QueryCounter() {
    String zkHostPort = "localhost:2181";
    String topic = "RandomQuery";

    String zkRoot = "/QueryCounter";
    String zkSpoutId = "QueryCounter-spout";
    ZkHosts zkHosts = new ZkHosts(zkHostPort);

    logger.debug("This is Inside kafka spout cluster");
    SpoutConfig spoutCfg = new SpoutConfig(zkHosts, topic, zkRoot, zkSpoutId);
    spoutCfg.scheme=new SchemeAsMultiScheme(new StringScheme());
    KafkaSpout kafkaSpout = new KafkaSpout(spoutCfg);
    return kafkaSpout;
  }

}

WordNormalizer Bolt

public class WordNormalizer extends BaseBasicBolt {
static final Logger logger = Logger.getLogger(WordNormalizer.class);
public void cleanup() {}

/**
 * The bolt receives a line from the words file and
 * normalizes it.
 *
 * Normalizing trims the line and converts it to lower case
 * before emitting it.
 */
public void execute(Tuple input, BasicOutputCollector collector) {
    String sentence = input.getString(0);
    logger.debug("This is Word_normalizer Function");

        sentence = sentence.trim();
        System.out.println("In Normalizer : "+sentence);
        if(!sentence.isEmpty()){
            sentence = sentence.toLowerCase();
            collector.emit(new Values(sentence));
            logger.debug("This is Word_normalizer Emitting Value1");
        }
    }



/**
 * The bolt only emits the field "sentence"
 */
public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declare(new Fields("sentence"));
    logger.debug("This is Word_normalizer Emitting Value2");
}
}
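Note that the javadoc above talks about splitting the line into words, but execute() emits the whole trimmed, lower-cased sentence as a single tuple, so WordCounter ends up counting sentences. If per-word counts are actually intended, the splitting could be factored out along these lines (SentenceSplitter is an illustrative name, not part of the topology):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Hedged sketch: trims, lower-cases, and splits a sentence into words,
// as the normalizer's javadoc describes. Each word would then be
// emitted as its own tuple instead of the whole sentence.
public class SentenceSplitter {
    public static List<String> normalize(String sentence) {
        List<String> words = new ArrayList<String>();
        for (String word : sentence.trim().toLowerCase(Locale.ROOT).split("\\s+")) {
            if (!word.isEmpty()) {
                words.add(word);
            }
        }
        return words;
    }
}
```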

WordCounter

public class WordCounter extends BaseBasicBolt {
static final Logger logger = Logger.getLogger(WordCounter.class);
Integer id;
String name;
Map<String, Integer> counters;

/**
 * When the cluster is shut down (at the end of the topology),
 * print the word counters.
 */
@Override
public void cleanup() {
    System.out.println("-- Word Counter ["+name+"-"+id+"] --");
    for(Map.Entry<String, Integer> entry : counters.entrySet()){
        System.out.println(entry.getKey()+": "+entry.getValue());
        logger.debug("This is Word_counter cleanup Function");
    }
}

/**
 * On create 
 */
@Override
public void prepare(Map stormConf, TopologyContext context) {
    this.counters = new HashMap<String, Integer>();
    this.name = context.getThisComponentId();
    this.id = context.getThisTaskId();
}

@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {}


@Override
public void execute(Tuple input, BasicOutputCollector collector) {
    String str = input.getString(0);
    /**
     * If the word doesn't exist in the map, initialize its
     * count to 1; otherwise add 1.
     */
    logger.debug("This is Word_counter execute Function");
    if(!counters.containsKey(str)){
        counters.put(str, 1);
    }else{
        Integer c = counters.get(str) + 1;
        counters.put(str, c);
        System.out.println("In Counter:" + c);
    }

}
}
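The count-or-initialize step in execute() can be checked in isolation, independent of Storm. A minimal sketch of the same logic, with CountingSketch as a hypothetical name:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of WordCounter's counting step: initialize a word's count to 1
// on first sight, otherwise add 1, returning the new count.
public class CountingSketch {
    private final Map<String, Integer> counters = new HashMap<String, Integer>();

    public int increment(String word) {
        Integer current = counters.get(word);
        int next = (current == null) ? 1 : current + 1;
        counters.put(word, next);
        return next;
    }
}
```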

What changes do I need to make?

0 Answers:

No answers yet.