I am writing a dockerized Java 9 Spring application using Apache Storm 1.1.2, Apache Kafka 0.10, Zookeeper, and Docker Compose.
My topology worked perfectly on a local cluster inside the dockerized service, but now I am moving it to a production cluster.
The service I created to submit the topology to the Storm cluster seems to run fine, and the code in its @PostConstruct looks like this:
import java.util.List;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.topology.TopologyBuilder;

// Spout: consume from "topic" on the dockerized broker.
KafkaSpoutConfig<String, String> spoutConf =
        KafkaSpoutConfig.builder("kafka:9092", "topic")
                .setProp(ConsumerConfig.GROUP_ID_CONFIG, "my-group-id")
                .setProp(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, MyDeserializer.class)
                .build();

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("kafkaSpoutId", new KafkaSpout<>(spoutConf));
builder.setBolt("boltId", new MyBolt()).shuffleGrouping("kafkaSpoutId");

// Cluster coordinates, matching the compose service names below.
Config conf = new Config();
conf.setNumWorkers(2);
conf.put(Config.STORM_ZOOKEEPER_SERVERS, List.of("zookeeper"));
conf.put(Config.STORM_ZOOKEEPER_PORT, 2181);
conf.put(Config.NIMBUS_SEEDS, List.of("nimbus"));
conf.put(Config.NIMBUS_THRIFT_PORT, 6627);

// Tell StormSubmitter which jar to upload to Nimbus.
System.setProperty("storm.jar", "/opt/app.jar");
StormSubmitter.submitTopology("topology-id", conf, builder.createTopology());
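The two application classes referenced above, MyDeserializer and MyBolt, are not shown in the post. Minimal sketches of what they might look like follow; the String payload type, the output topic name, and the producer settings are all assumptions, not details from the original. First, a Kafka 0.10 value deserializer:

import java.nio.charset.StandardCharsets;
import java.util.Map;

import org.apache.kafka.common.serialization.Deserializer;

// Hypothetical shape of MyDeserializer: assumes UTF-8 text payloads.
// Like every class the topology references, it must end up on the
// worker classpath.
public class MyDeserializer implements Deserializer<String> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // No configuration needed for this sketch.
    }

    @Override
    public String deserialize(String topic, byte[] data) {
        return data == null ? null : new String(data, StandardCharsets.UTF_8);
    }

    @Override
    public void close() {
        // Nothing to release.
    }
}

and a bolt that publishes each value to Kafka through a producer created on the worker:

import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;

// Hypothetical shape of MyBolt: forwards each tuple's value to a
// Kafka topic. The topic name "aggregates" is an assumption.
public class MyBolt extends BaseBasicBolt {

    private transient KafkaProducer<String, String> producer;

    @Override
    public void prepare(Map stormConf, TopologyContext context) {
        // Build the producer on the worker, not in the submitting JVM,
        // so it is never serialized into the topology.
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producer = new KafkaProducer<>(props);
    }

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        // The storm-kafka-client spout emits the record value in the "value" field.
        producer.send(new ProducerRecord<>("aggregates", input.getStringByField("value")));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Terminal bolt: nothing is emitted downstream.
    }
}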
My docker-compose file looks like this:

version: "2.1"
services:
  my-service:
    image: my-service
    mem_limit: 4G
    memswap_limit: 4G
    networks:
      - default
    environment:
      - SPRING_PROFILES_ACTIVE=local
  nimbus:
    image: storm:1.1.2
    container_name: nimbus
    command: >
      storm nimbus
      -c storm.zookeeper.servers="[\"zookeeper\"]"
      -c nimbus.seeds="[\"nimbus\"]"
    networks:
      - default
    ports:
      - 6627:6627
  supervisor:
    image: storm:1.1.2
    container_name: supervisor
    command: >
      storm supervisor
      -c storm.zookeeper.servers="[\"zookeeper\"]"
      -c nimbus.seeds="[\"nimbus\"]"
    networks:
      - default
    depends_on:
      - nimbus
    links:
      - nimbus
    restart: always
    ports:
      - 6700
      - 6701
      - 6702
      - 6703
  ui:
    image: storm:1.1.2
    command: storm ui -c nimbus.seeds="[\"nimbus\"]"
    networks:
      - default
    ports:
      - 8081:8080
networks:
  default:
    external:
      name: myNetwork
All of the containers come up. In the Storm UI I can see the topology that was created in the @PostConstruct, but no Kafka messages are being processed, and the bolt that should be producing aggregates with a local Kafka producer publishes nothing. In the supervisor containers I see two exceptions repeated:
The first (and, I think, the more important one) is in /logs/worker-artifact/topology-id****/6700/worker.log:

org.apache.storm.shade.org.jboss.netty.channel.ChannelException: Failed to bind to: 0.0.0.0/0.0.0.0:6700

The second exception is:

java.lang.ClassNotFoundException: org.apache.storm.kafka.spout.KafkaSpout
UPDATE
Unfortunately I can't post the whole pom, but here are my Storm dependencies:
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-core</artifactId>
  <version>${storm.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-kafka-client</artifactId>
  <version>${storm.version}</version>
</dependency>
<dependency>
  <groupId>javax.servlet</groupId>
  <artifactId>javax.servlet-api</artifactId>
</dependency>
Here is my spring-boot-maven-plugin. I thought that adding the configuration to make the jar copied into my container non-executable would fix the problem. When I inspect the jar in the container, the entries for the dependencies do include the jars, but alongside a lot of gibberish:
<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <configuration>
        <executable>false</executable>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-resources-plugin</artifactId>
    </plugin>
  </plugins>
</build>
Answer 0 (score: 0)
I figured it out. The problem was that I was building a non-executable jar rather than a true fat jar. I added the following tags to my pom:
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <source>1.8</source>
    <target>1.8</target>
  </configuration>
</plugin>
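With the assembly plugin bound this way, the build additionally produces target/${project.build.finalName}-jar-with-dependencies.jar: a flat fat jar in which storm-kafka-client and the application classes sit side by side, unlike the nested BOOT-INF layout of a repackaged Spring Boot jar (the "gibberish" seen earlier). That flat jar is the artifact the next steps point Storm at.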
Next, I changed the System property to:

System.setProperty("storm.jar", "/opt/storm.jar");

and added the following line to my Dockerfile (presumably filtered by the maven-resources-plugin shown above, so that ${project.build.finalName} resolves at build time):

COPY ${project.build.finalName}-jar-with-dependencies.jar /opt/storm.jar

Finally, I first run mvn compile assembly:single from the project dir, then copy the jar-with-dependencies from target/ to target/docker-ready, and finally bring the containers back up.