I manually installed Apache Storm 1.1.0 on a CDH 5.11 cluster. The cluster is secured with Kerberos. I wrote a Storm sample that pulls data from a Kafka topic and inserts it into an HDFS directory in real time, so the sample uses storm-kafka as well as storm-hdfs; a minimal sketch of the topology wiring follows the stack trace below. When I run the topology, the kafka-spout fails with the following error:
==> 2017-06-18 22:29:31.297 o.a.z.ClientCnxn Thread-14-kafka-spout-executor[5 5]-SendThread(localhost:2181) [INFO] Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
==> 2017-06-18 22:29:31.571 k.c.SimpleConsumer Thread-14-kafka-spout-executor[5 5] [INFO] Reconnect due to error:
java.nio.channels.ClosedChannelException: null
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110) ~[stormjar.jar:?]
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:85) [stormjar.jar:?]
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) [stormjar.jar:?]
at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) [stormjar.jar:?]
at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79) [stormjar.jar:?]
at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:75) [stormjar.jar:?]
at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:65) [stormjar.jar:?]
at org.apache.storm.kafka.PartitionManager.<init>(PartitionManager.java:94) [stormjar.jar:?]
at org.apache.storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98) [stormjar.jar:?]
at org.apache.storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69) [stormjar.jar:?]
at org.apache.storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:129) [stormjar.jar:?]
at org.apache.storm.daemon.executor$fn__4976$fn__4991$fn__5022.invoke(executor.clj:644) [storm-core-1.1.0.jar:1.1.0]
at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484) [storm-core-1.1.0.jar:1.1.0]
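For reference, here is a minimal sketch of how the topology is wired. This is hedged: the ZooKeeper quorum, topic name, consumer id, NameNode URL, and output path are placeholders, not the actual values from my cluster.

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.hdfs.bolt.HdfsBolt;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;

public class KafkaToHdfsTopology {
    public static void main(String[] args) throws Exception {
        // Kafka spout: reads the topic via the ZooKeeper quorum (placeholder values).
        SpoutConfig spoutConfig = new SpoutConfig(
                new ZkHosts("zkhost:2181"), "sample-topic", "/sample-topic", "kafka-to-hdfs");
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        // HDFS bolt: writes delimited records, syncing every 1000 tuples and
        // rotating files at 5 MB (placeholder NameNode URL and output path).
        HdfsBolt hdfsBolt = new HdfsBolt()
                .withFsUrl("hdfs://namenode:8020")
                .withFileNameFormat(new DefaultFileNameFormat().withPath("/storm/output/"))
                .withRecordFormat(new DelimitedRecordFormat().withFieldDelimiter("|"))
                .withRotationPolicy(new FileSizeRotationPolicy(5.0f, FileSizeRotationPolicy.Units.MB))
                .withSyncPolicy(new CountSyncPolicy(1000));

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig), 1);
        builder.setBolt("hdfs-bolt", hdfsBolt, 1).shuffleGrouping("kafka-spout");

        StormSubmitter.submitTopology("kafka-to-hdfs", new Config(), builder.createTopology());
    }
}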
Kafka version: 2.1.1-1.2.1.1.p0.18
There is no storm-kafka*.jar under "/usr/local/storm". Even so, this sample worked fine before the cluster was kerberized.
I tried the same sample on Hortonworks, and the topology ran fine after adding the following line to set the security protocol:
spoutConfig.securityProtocol = "SASL_PLAINTEXT";
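For context, that line sat in the spout setup roughly as follows on HDP (a hedged sketch; the topic and ZooKeeper values are placeholders, and the securityProtocol field comes from the storm-kafka build shipped there):

SpoutConfig spoutConfig = new SpoutConfig(
        new ZkHosts("zkhost:2181"), "sample-topic", "/sample-topic", "kafka-to-hdfs");
// Tell the spout's SimpleConsumer to talk SASL_PLAINTEXT to the kerberized brokers.
spoutConfig.securityProtocol = "SASL_PLAINTEXT";
builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig), 1);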
After adding the same line in the Cloudera setup, compilation fails with a "cannot find symbol" error, presumably because the storm-kafka build on my classpath does not declare a securityProtocol field.
Please let me know if you need any further information.
Thanks in advance.