I am learning Event Sourcing and CQRS and found a good video series on YouTube. The series comes with a code repository on GitHub. It uses three modules (barista, orders and beans) that talk to each other in a distributed setup to manage coffee orders from customers. The run instructions are as follows:
Start an Apache Kafka broker, e.g. using Docker Compose: https://github.com/wurstmeister/kafka-docker. Configure KAFKA_ADVERTISED_HOST_NAME to your corresponding IP address.
Configure each kafka.properties file with bootstrap.servers=<your-IP>:9092.
Build and run the individual instances. In each of the orders/, beans/ and barista/ directories, execute build-run-local.sh. This builds the Gradle project, builds the Docker image and starts a new instance of the given service.
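As I understand the Kafka-related settings, each module's kafka.properties ends up with a single line like the one below, and KAFKA_ADVERTISED_HOST_NAME in the docker-compose.yml of wurstmeister/kafka-docker is set to the same address (192.168.1.10 is only a placeholder for the machine's IP, not a value from the instructions):

bootstrap.servers=192.168.1.10:9092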
I followed these steps without any problem. After building and running the individual instances, executing the command
$ curl http://localhost:8002/beans/resources/beans -i
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json
Content-Length: 2
Date: Tue, 08 Jan 2019 13:16:12 GMT
Then I tried to publish a bean with the command
$ curl http://localhost:8002/beans/resources/beans -i -XPOST \
-H 'content-type: application/json' \
-d '{"beanOrigin": "Colombia", "amount": 10}'
At this point the terminal hangs and never responds. I looked into the individual modules and examined the Dockerfile and build-run-local.sh along with how they are executed. For example, the Dockerfile of the beans module is:
$ cat Dockerfile
FROM sdaschner/wildfly:javaee8-kafka-b1
COPY build/libs/beans.war $DEPLOYMENT_DIR
The beans/build-run-local.sh is:
$ cat build-run-local.sh
#!/bin/bash
cd ${0%/*}
set -eu
gradle build
docker build --rm -t scalable-coffee-shop-beans:1 .
docker run --rm --name beans -p 8002:8080 scalable-coffee-shop-beans:1
When I run the file, I get the following output (initial lines):
$ ./build-run-local.sh
BUILD SUCCESSFUL in 0s
5 actionable tasks: 5 up-to-date
Sending build context to Docker daemon 310.3kB
Step 1/2 : FROM sdaschner/wildfly:javaee8-kafka-b1
---> 7a638cd4a3c8
Step 2/2 : COPY build/libs/beans.war $DEPLOYMENT_DIR
---> Using cache
---> ef4901bbea66
Successfully built ef4901bbea66
Successfully tagged scalable-coffee-shop-beans:1
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /opt/jboss/wildfly
JAVA: /usr/lib/jvm/java/bin/java
JAVA_OPTS: -server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
I suspect the POST command hangs because I may be missing some additional configuration. For example, I have not configured any of the previously mentioned variables: $DEPLOYMENT_DIR, JBOSS_HOME or JAVA_OPTS.
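If it matters, one way I know of to check whether those variables are actually set inside the running container (assuming the container name beans from build-run-local.sh) is:

$ docker exec beans env | grep -E 'DEPLOYMENT_DIR|JBOSS_HOME|JAVA_OPTS'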
I admit that my experience with Docker is limited (or minimal). However, I found that this is the command that executes the instructions in the Dockerfile:
# reads the 2 instructions from the `Dockerfile` and executes them.
$ docker build --rm -t scalable-coffee-shop-beans:1 .
Sending build context to Docker daemon 310.3kB
Step 1/2 : FROM sdaschner/wildfly:javaee8-kafka-b1
---> 7a638cd4a3c8
Step 2/2 : COPY build/libs/beans.war $DEPLOYMENT_DIR
---> Using cache
---> ef4901bbea66
Successfully built ef4901bbea66
Successfully tagged scalable-coffee-shop-beans:1
Can someone help me get this application running correctly?
Update:
I followed the suggestion in the comments and added log messages. This is what I get:
03:26:12,271 INFO [com.sebastian_daschner.scalable_coffee_shop.beans.boundary.BeansResource] (default task-1) Bean origin = Colombia , Amount = 10
03:26:12,273 INFO [com.sebastian_daschner.scalable_coffee_shop.beans.boundary.BeanCommandService] (default task-1) Bean origin = Colombia , Amount = 1
So it appears that com.sebastian_daschner.scalable_coffee_shop.beans.boundary.BeanCommandService does receive the information correctly and invokes com.sebastian_daschner.scalable_coffee_shop.events.control.EventProducer from the following method:
public void storeBeans(final String beanOrigin, final int amount) {
    LOGGER.log(Level.INFO, "Bean origin = " + beanOrigin +
            " " + ", Amount = " + amount);
    eventProducer.publish(new BeansStored(beanOrigin, amount));
}
Inside EventProducer, it calls the publish method shown below:
public void publish(CoffeeEvent... events) {
    try {
        LOGGER.log(Level.INFO, "Events = " + Arrays.toString(events));
        producer.beginTransaction();
        send(events);
        producer.commitTransaction();
    } catch (ProducerFencedException e) {
        LOGGER.log(Level.SEVERE, e.toString(), e);
        producer.close();
    } catch (KafkaException e) {
        LOGGER.log(Level.SEVERE, e.toString(), e);
        producer.abortTransaction();
    }
}
At the moment I am not getting any log output from the line
LOGGER.log(Level.INFO, "Events = " + Arrays.toString(events));
so I believe the problem is related to Kafka and that the following code does not get executed:
producer.beginTransaction();
send(events);
producer.commitTransaction();
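To make clear what I mean, here is a minimal standalone sketch of a transactional Kafka producer that I wrote only for illustration; it is not the project's code, and the broker address, topic name and transactional.id are my own placeholders. As far as I understand the producer API, beginTransaction() can only be called after a successful initTransactions(), and both initTransactions() and commitTransaction() need to reach the broker, so they block (up to max.block.ms) when bootstrap.servers or the broker's advertised host name is not reachable from the container, which would match the hanging POST:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TransactionalProducerSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        // must be reachable from the client; the broker's advertised listener
        // must also resolve from where this code runs (placeholder IP below)
        props.put("bootstrap.servers", "192.168.1.10:9092");
        // a transactional.id is required before initTransactions() can be called
        props.put("transactional.id", "beans-producer-1");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // contacts the transaction coordinator on the broker; if the broker
            // cannot be reached, this call blocks up to max.block.ms and then fails
            producer.initTransactions();

            producer.beginTransaction();
            producer.send(new ProducerRecord<>("beans", "BeansStored",
                    "{\"beanOrigin\": \"Colombia\", \"amount\": 10}"));
            // commitTransaction() also requires the broker to be reachable
            producer.commitTransaction();
        }
    }
}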
Now that I have pinpointed exactly where it stops, can someone help me figure out what is going wrong?