I can run Storm Kafka using a local cluster, but it fails when I submit it with StormSubmitter. Below is my topology code.
Can anyone help me with this issue? :)
package com.org.kafka;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.AlreadyAliveException;
import org.apache.storm.generated.AuthorizationException;
import org.apache.storm.generated.InvalidTopologyException;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;

import kafka.api.OffsetRequest;

public class KafkaTopology {

    public static void main(String[] args)
            throws AlreadyAliveException, InvalidTopologyException, AuthorizationException {
        // Spout configuration: consume topic "secondTest" via ZooKeeper on localhost.
        ZkHosts zkHosts = new ZkHosts("localhost:2181");
        SpoutConfig kafkaConfig = new SpoutConfig(zkHosts, "secondTest", "", "id7");
        kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
        kafkaConfig.startOffsetTime = OffsetRequest.EarliestTime();

        // Topology: KafkaSpout -> Sentence-bolt -> PrinterBolt.
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("KafkaSpout", new KafkaSpout(kafkaConfig), 1);
        builder.setBolt("Sentence-bolt", new SentenceBolt(), 1).globalGrouping("KafkaSpout");
        builder.setBolt("PrinterBolt", new PrinterBolt(), 1).globalGrouping("Sentence-bolt");

        LocalCluster cluster = new LocalCluster();
        Config conf = new Config();
        // Submit to the remote cluster via Nimbus (this is where the problem shows up).
        StormSubmitter.submitTopology("KafkaStormToplogy", conf, builder.createTopology());
        try {
            System.out.println("Waiting to consume from kafka");
            Thread.sleep(10000);
        } catch (Exception exception) {
            System.out.println("Thread interrupted exception : " + exception);
        }
        cluster.killTopology("KafkaStormToplogy");
        cluster.shutdown();
    }
}
I found the following exception in the worker.log file.
But when I look at the terminal, it shows "Finished submitting topology: KafkaStormToplogy".
2018-01-24 11:58:38.941 o.a.s.d.worker main [ERROR] Error on initialization of server mk-worker
java.lang.RuntimeException: java.io.InvalidClassException: org.apache.storm.kafka.SpoutConfig; local class incompatible: stream classdesc serialVersionUID = -1247769246497567352, local class serialVersionUID = 6814635004761021338
at org.apache.storm.utils.Utils.javaDeserialize(Utils.java:254) ~[storm-core-1.0.5.jar:1.0.5]
at org.apache.storm.utils.Utils.getSetComponentObject(Utils.java:504) ~[storm-core-1.0.5.jar:1.0.5]
at org.apache.storm.daemon.task$get_task_object.invoke(task.clj:74) ~[storm-core-1.0.5.jar:1.0.5]
at org.apache.storm.daemon.task$mk_task_data$fn__4609.invoke(task.clj:177) ~[storm-core-1.0.5.jar:1.0.5]
at org.apache.storm.util$assoc_apply_self.invoke(util.clj:931) ~[storm-core-1.0.5.jar:1.0.5]
at org.apache.storm.daemon.task$mk_task_data.invoke(task.clj:170) ~[storm-core-1.0.5.jar:1.0.5]
at org.apache.storm.daemon.task$mk_task.invoke(task.clj:181) ~[storm-core-1.0.5.jar:1.0.5]
at org.apache.storm.daemon.executor$mk_executor$fn__4830.invoke(executor.clj:371) ~[storm-core-1.0.5.jar:1.0.5]
at clojure.core$map$fn__4553.invoke(core.clj:2622) ~[clojure-1.7.0.jar:?]
at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?]
at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?]
at clojure.lang.RT.seq(RT.java:507) ~[clojure-1.7.0.jar:?]
at clojure.core$seq__4128.invoke(core.clj:137) ~[clojure-1.7.0.jar:?]
at clojure.core.protocols$seq_reduce.invoke(protocols.clj:30) ~[clojure-1.7.0.jar:?]
at clojure.core.protocols$fn__6506.invoke(protocols.clj:101) ~[clojure-1.7.0.jar:?]
Answer 0 (score: 1)
I think this may be because you have a different version of storm-kafka on your Nimbus classpath than on your worker classpath, or because you are running Nimbus and the workers on different JDKs. SpoutConfig (https://github.com/apache/storm/blob/1.x-branch/external/storm-kafka/src/jvm/org/apache/storm/kafka/SpoutConfig.java) should declare a serialVersionUID, but it doesn't. See https://stackoverflow.com/a/285809/8845188 for reference. As I understand it, when a class does not declare one, the serialVersionUID is computed by the JVM at runtime, and different JDKs may compute different numbers for the same class.
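To check for this kind of mismatch yourself, here is a minimal diagnostic sketch (assuming the storm-kafka jar is on the classpath) that prints the serialVersionUID the local JVM resolves for SpoutConfig. Running it once with the Nimbus classpath/JDK and once with the worker classpath/JDK and comparing the two numbers would confirm the incompatibility; the class name SerialVersionCheck is just an illustrative choice.

import java.io.ObjectStreamClass;
import org.apache.storm.kafka.SpoutConfig;

public class SerialVersionCheck {
    public static void main(String[] args) {
        // ObjectStreamClass computes the effective serialVersionUID for the
        // class exactly as this JVM/classpath loads it.
        long uid = ObjectStreamClass.lookup(SpoutConfig.class).getSerialVersionUID();
        System.out.println("SpoutConfig serialVersionUID = " + uid);
    }
}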
I would clone storm-kafka, add the missing serialVersionUID field to SpoutConfig, build storm-kafka, and try again. I've raised https://issues.apache.org/jira/browse/STORM-2911 to track fixing this; feel free to take a look at it.
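For illustration, the suggested change amounts to declaring the standard serialVersionUID field in the SpoutConfig source; a rough sketch is shown below. The value 1L is an arbitrary placeholder for illustration, not necessarily what the patch in STORM-2911 ends up using, and the class declaration is abbreviated.

// Sketch of the change in
// external/storm-kafka/src/jvm/org/apache/storm/kafka/SpoutConfig.java
public class SpoutConfig extends KafkaConfig implements Serializable {
    // Pin the serialization version so Nimbus and the workers agree on it,
    // even if they were compiled or loaded by different JDKs.
    private static final long serialVersionUID = 1L;  // placeholder value

    // ... existing fields and constructors unchanged ...
}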