So I've just started looking into Storm and am trying to understand it. I'm attempting to connect to a Kafka topic, read the data, and write it to an HDFS bolt. At first I created the topology without shuffleGrouping("stormspout"), and my Storm UI showed the spout was consuming data from the topic, but nothing was being written to the bolt (apart from the empty files it created on HDFS). Then I added shuffleGrouping("stormspout"); and now the bolt appears to be erroring. If anyone can help with this, I'd really appreciate it.
Thanks, Colman
Error
2015-04-13 00:02:58 s.k.PartitionManager [INFO] Read partition information from: /storm/partition_0  --> null
2015-04-13 00:02:58 s.k.PartitionManager [INFO] No partition information found, using configuration to determine offset
2015-04-13 00:02:58 s.k.PartitionManager [INFO] Last commit offset from zookeeper: 0
2015-04-13 00:02:58 s.k.PartitionManager [INFO] Commit offset 0 is more than 9223372036854775807 behind, resetting to startOffsetTime=-2
2015-04-13 00:02:58 s.k.PartitionManager [INFO] Starting Kafka 192.168.134.137:0 from offset 0
2015-04-13 00:02:58 s.k.ZkCoordinator [INFO] Task [1/1] Finished refreshing
2015-04-13 00:02:58 b.s.d.task [INFO] Emitting: stormspout default [colmanblah]
2015-04-13 00:02:58 b.s.d.executor [INFO] TRANSFERING tuple TASK: 2 TUPLE: source: stormspout:3, stream: default, id: {462820364856350458=5573117062061876630}, [colmanblah]
2015-04-13 00:02:58 b.s.d.task [INFO] Emitting: stormspout __ack_init [462820364856350458 5573117062061876630 3]
2015-04-13 00:02:58 b.s.d.executor [INFO] TRANSFERING tuple TASK: 1 TUPLE: source: stormspout:3, stream: __ack_init, id: {}, [462820364856350458 5573117062061876630 3]
2015-04-13 00:02:58 b.s.d.executor [INFO] Processing received message FOR 1 TUPLE: source: stormspout:3, stream: __ack_init, id: {}, [462820364856350458 5573117062061876630 3]
2015-04-13 00:02:58 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: TUPLE: source: stormspout:3, stream: __ack_init, id: {}, [462820364856350458 5573117062061876630 3]
2015-04-13 00:02:58 b.s.d.executor [INFO] Execute done TUPLE source: stormspout:3, stream: __ack_init, id: {}, [462820364856350458 5573117062061876630 3] TASK: 1 DELTA:
2015-04-13 00:02:59 b.s.d.executor [INFO] Preparing bolt stormbolt:(2)
2015-04-13 00:02:59 b.s.d.executor [INFO] Processing received message FOR 2 TUPLE: source: stormspout:3, stream: default, id: {462820364856350458=5573117062061876630}, [colmanblah]
2015-04-13 00:02:59 b.s.util [ERROR] Async loop died!
java.lang.RuntimeException: java.lang.NullPointerException
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$fn__5697$fn__5710$fn__5761.invoke(executor.clj:794) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:465) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
Caused by: java.lang.NullPointerException: null
at org.apache.storm.hdfs.bolt.HdfsBolt.execute(HdfsBolt.java:92) ~[storm-hdfs-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$fn__5697$tuple_action_fn__5699.invoke(executor.clj:659) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$mk_task_receiver$fn__5620.invoke(executor.clj:415) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.disruptor$clojure_handler$reify__1741.onEvent(disruptor.clj:58) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:120) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
... 6 common frames omitted
2015-04-08 04:26:39 b.s.d.executor [ERROR]
java.lang.RuntimeException: java.lang.NullPointerException
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$fn__5697$fn__5710$fn__5761.invoke(executor.clj:794) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:465) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
Caused by: java.lang.NullPointerException: null
at org.apache.storm.hdfs.bolt.HdfsBolt.execute(HdfsBolt.java:92) ~[storm-hdfs-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$fn__5697$tuple_action_fn__5699.invoke(executor.clj:659) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$mk_task_receiver$fn__5620.invoke(executor.clj:415) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.disruptor$clojure_handler$reify__1741.onEvent(disruptor.clj:58) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:120) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
Code:
TopologyBuilder builder = new TopologyBuilder();
Config config = new Config();
//config.put(Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS, 7000);
config.setNumWorkers(1);
config.setDebug(true);
//LocalCluster cluster = new LocalCluster();
//zookeeper
BrokerHosts brokerHosts = new ZkHosts("192.168.134.137:2181", "/brokers");
//spout
SpoutConfig spoutConfig = new SpoutConfig(brokerHosts, "myTopic", "/kafkastorm", "KafkaSpout");
spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
spoutConfig.forceFromStart = true;
builder.setSpout("stormspout", new KafkaSpout(spoutConfig),4);
//bolt
SyncPolicy syncPolicy = new CountSyncPolicy(10); //Synchronize data buffer with the filesystem every 10 tuples
FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(5.0f, Units.MB); // Rotate data files when they reach five MB
FileNameFormat fileNameFormat = new DefaultFileNameFormat().withPath("/stormstuff"); // Use default, Storm-generated file names
builder.setBolt("stormbolt", new HdfsBolt()
.withFsUrl("hdfs://192.168.134.137:8020")//54310
.withSyncPolicy(syncPolicy)
.withRotationPolicy(rotationPolicy)
.withFileNameFormat(fileNameFormat),2
).shuffleGrouping("stormspout");
//cluster.submitTopology("ColmansStormTopology", config, builder.createTopology());
try {
StormSubmitter.submitTopologyWithProgressBar("ColmansStormTopology", config, builder.createTopology());
} catch (AlreadyAliveException e) {
e.printStackTrace();
} catch (InvalidTopologyException e) {
e.printStackTrace();
}
POM.XML dependencies
<dependencies>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>0.9.3</version>
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-kafka</artifactId>
<version>0.9.3</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-hdfs</artifactId>
<version>0.9.3</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.10</artifactId>
<version>0.8.1.1</version>
<exclusions>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
Answer 0 (Score: 0)
First, try emitting the values from the execute method itself. If you are emitting from different worker threads, have all the worker threads feed their data into a LinkedBlockingQueue, and allow only a single worker thread to emit the values taken from that LinkedBlockingQueue.
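A minimal sketch of that single-emitter pattern, using only the JDK. None of this is Storm API; the class name, the String "tuples", and the poison-pill shutdown signal are all illustrative stand-ins (a real bolt would call collector.emit inside the consumer loop):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Multiple producer threads hand values to one LinkedBlockingQueue;
// a single consumer thread is the only place "emit" ever happens,
// so the emit path is never touched concurrently.
public class SingleEmitterSketch {
    static final String POISON = "__POISON__"; // illustrative shutdown marker

    public static List<String> drainWithSingleEmitter(int workers, int tuplesPerWorker) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        List<String> emitted = new ArrayList<>();

        // The single emitter thread: stops after seeing one poison pill per worker.
        Thread emitter = new Thread(() -> {
            try {
                int poisons = 0;
                while (poisons < workers) {
                    String t = queue.take();
                    if (POISON.equals(t)) { poisons++; continue; }
                    emitted.add(t); // stand-in for collector.emit(new Values(t))
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        emitter.start();

        // Worker threads only offer into the queue; they never emit directly.
        List<Thread> producers = new ArrayList<>();
        for (int w = 0; w < workers; w++) {
            final int id = w;
            Thread p = new Thread(() -> {
                for (int i = 0; i < tuplesPerWorker; i++) {
                    queue.offer("worker" + id + "-tuple" + i);
                }
                queue.offer(POISON); // signal this worker is done
            });
            p.start();
            producers.add(p);
        }

        try {
            for (Thread p : producers) p.join();
            emitter.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return emitted;
    }

    public static void main(String[] args) {
        List<String> out = drainWithSingleEmitter(3, 5);
        System.out.println("emitted " + out.size() + " tuples");
    }
}
```

The point of the design is that the queue serializes all emits through one thread, which sidesteps any thread-safety issues in the emit path.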
Second, try setting Config.setMaxSpoutPending to some value and run the code again; if the problem persists, keep reducing that value.
Reference - Config.TOPOLOGY_MAX_SPOUT_PENDING: this sets the maximum number of spout tuples that can be pending on a single spout task at once (pending means the tuple has not been acked or failed yet). It is highly recommended you set this config to prevent queue explosion.
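For example, on the same config object from the question's code (the value 1000 is an arbitrary illustrative starting point, not a recommendation):

```java
// Cap the number of un-acked tuples in flight per spout task.
// 1000 is an illustrative starting value; lower it if problems persist.
config.setMaxSpoutPending(1000);
// Equivalent to: config.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 1000);
```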
Answer 1 (Score: 0)
I eventually figured this out by going through the Storm source code.
I was not setting
RecordFormat format = new DelimitedRecordFormat().withFieldDelimiter("|");
and including it:
builder.setBolt("stormbolt", new HdfsBolt()
.withFsUrl("hdfs://192.168.134.137:8020")//54310
.withSyncPolicy(syncPolicy)
.withRecordFormat(format)
.withRotationPolicy(rotationPolicy)
.withFileNameFormat(fileNameFormat),1
).shuffleGrouping("stormspout");
In the HdfsBolt.java class, the bolt tries to use the RecordFormat, and if it hasn't been set it basically falls over. That's where the NPE was coming from.
Hopefully this helps someone else; make sure you have set all the pieces this class needs. A more useful error message, such as "RecordFormat not set", would be nice...