Hello, I have configured a cluster with two nodes (two VMs). The cluster starts correctly, but the -advertise flag seems to be ignored by Consul.
docker-compose vm1(app)
version: '2'
services:
  appconsul:
    build: consul/
    ports:
      - 192.168.20.10:8300:8300
      - 192.168.20.10:8301:8301
      - 192.168.20.10:8301:8301/udp
      - 192.168.20.10:8302:8302
      - 192.168.20.10:8302:8302/udp
      - 192.168.20.10:8400:8400
      - 192.168.20.10:8500:8500
      - 172.32.0.1:53:53/udp
    hostname: node_1
    command: -server -advertise 192.168.20.10 -bootstrap-expect 2 -ui-dir /ui
    networks:
      net-app:
  appregistrator:
    build: registrator/
    hostname: app
    command: consul://192.168.20.10:8500
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    depends_on:
      - appconsul
    networks:
      net-app:
networks:
  net-app:
    driver: bridge
    ipam:
      config:
        - subnet: 172.32.0.0/24
docker-compose vm2(web)
version: '2'
services:
  webconsul:
    build: consul/
    ports:
      - 192.168.20.11:8300:8300
      - 192.168.20.11:8301:8301
      - 192.168.20.11:8301:8301/udp
      - 192.168.20.11:8302:8302
      - 192.168.20.11:8302:8302/udp
      - 192.168.20.11:8400:8400
      - 192.168.20.11:8500:8500
      - 172.33.0.1:53:53/udp
    hostname: node_2
    command: -server -advertise 192.168.20.11 -join 192.168.20.10
    networks:
      net-web:
  webregistrator:
    build: registrator/
    hostname: web
    command: consul://192.168.20.11:8500
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    depends_on:
      - webconsul
    networks:
      net-web:
networks:
  net-web:
    driver: bridge
    ipam:
      config:
        - subnet: 172.33.0.0/24
After starting I get no errors about the advertise flag, but the services are registered with the private IPs of the internal networks instead of the advertised addresses (192.168.20.10 and 192.168.20.11). Any ideas?
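A quick way to see which address each service got is to query the local agent's service list; a minimal check (the HTTP API is published on 192.168.20.10:8500 as above, use 192.168.20.11:8500 on vm2, and jq is only used to make the output readable):

# List every service the local agent knows about, with the address it was registered under
curl -s http://192.168.20.10:8500/v1/agent/services | jq '.[] | {ID, Address, Port}'

In my case the Address field shows the containers' private addresses rather than 192.168.20.10/11.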
Attaching the logs from node_1; they are the same as node_2's:
appconsul_1 | ==> WARNING: Expect Mode enabled, expecting 2 servers
appconsul_1 | ==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
appconsul_1 | ==> Starting raft data migration...
appconsul_1 | ==> Starting Consul agent...
appconsul_1 | ==> Starting Consul agent RPC...
appconsul_1 | ==> Consul agent running!
appconsul_1 | Node name: 'node_1'
appconsul_1 | Datacenter: 'dc1'
appconsul_1 | Server: true (bootstrap: false)
appconsul_1 | Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
appconsul_1 | Cluster Addr: 192.168.20.10 (LAN: 8301, WAN: 8302)
appconsul_1 | Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
appconsul_1 | Atlas: <disabled>
appconsul_1 |
appconsul_1 | ==> Log data will now stream in as it occurs:
appconsul_1 |
appconsul_1 | 2017/06/13 14:57:24 [INFO] raft: Node at 192.168.20.10:8300 [Follower] entering Follower state
appconsul_1 | 2017/06/13 14:57:24 [INFO] serf: EventMemberJoin: node_1 192.168.20.10
appconsul_1 | 2017/06/13 14:57:24 [INFO] serf: EventMemberJoin: node_1.dc1 192.168.20.10
appconsul_1 | 2017/06/13 14:57:24 [INFO] consul: adding server node_1 (Addr: 192.168.20.10:8300) (DC: dc1)
appconsul_1 | 2017/06/13 14:57:24 [INFO] consul: adding server node_1.dc1 (Addr: 192.168.20.10:8300) (DC: dc1)
appconsul_1 | 2017/06/13 14:57:25 [ERR] agent: failed to sync remote state: No cluster leader
appconsul_1 | 2017/06/13 14:57:25 [ERR] agent: failed to sync changes: No cluster leader
appconsul_1 | 2017/06/13 14:57:26 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
appconsul_1 | 2017/06/13 14:57:48 [ERR] agent: failed to sync remote state: No cluster leader
appconsul_1 | 2017/06/13 14:58:13 [ERR] agent: failed to sync remote state: No cluster leader
appconsul_1 | 2017/06/13 14:58:22 [INFO] serf: EventMemberJoin: node_2 192.168.20.11
appconsul_1 | 2017/06/13 14:58:22 [INFO] consul: adding server node_2 (Addr: 192.168.20.11:8300) (DC: dc1)
appconsul_1 | 2017/06/13 14:58:22 [INFO] consul: Attempting bootstrap with nodes: [192.168.20.10:8300 192.168.20.11:8300]
appconsul_1 | 2017/06/13 14:58:23 [WARN] raft: Heartbeat timeout reached, starting election
appconsul_1 | 2017/06/13 14:58:23 [INFO] raft: Node at 192.168.20.10:8300 [Candidate] entering Candidate state
appconsul_1 | 2017/06/13 14:58:23 [WARN] raft: Remote peer 192.168.20.11:8300 does not have local node 192.168.20.10:8300 as a peer
appconsul_1 | 2017/06/13 14:58:23 [INFO] raft: Election won. Tally: 2
appconsul_1 | 2017/06/13 14:58:23 [INFO] raft: Node at 192.168.20.10:8300 [Leader] entering Leader state
appconsul_1 | 2017/06/13 14:58:23 [INFO] consul: cluster leadership acquired
appconsul_1 | 2017/06/13 14:58:23 [INFO] consul: New leader elected: node_1
appconsul_1 | 2017/06/13 14:58:23 [INFO] raft: pipelining replication to peer 192.168.20.11:8300
appconsul_1 | 2017/06/13 14:58:23 [INFO] consul: member 'node_1' joined, marking health alive
appconsul_1 | 2017/06/13 14:58:23 [INFO] consul: member 'node_2' joined, marking health alive
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_solr_1:8983'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8302'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8302:udp'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8301'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8500'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8300'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'consul'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_mysql_1:3306'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8400'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:53:udp'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8301:udp'
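Note that the agents themselves do gossip with the advertised IPs (see the EventMemberJoin lines above); only the addresses written by registrator look wrong. The cluster side can be double-checked from either container, for example (the container name here comes from my compose project name, adjust to yours; the consul binary is assumed to be on the image's PATH):

# Confirm both agents joined with the advertised VM IPs, not container IPs
docker exec dockerdata_appconsul_1 consul members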
Thanks for any reply.
UPDATE
I tried removing the networks section from the compose files but had the same problem. I solved it by switching to compose file v1; this configuration works:
Compose vm1 (app)
appconsul:
  build: consul/
  ports:
    - 192.168.20.10:8300:8300
    - 192.168.20.10:8301:8301
    - 192.168.20.10:8301:8301/udp
    - 192.168.20.10:8302:8302
    - 192.168.20.10:8302:8302/udp
    - 192.168.20.10:8400:8400
    - 192.168.20.10:8500:8500
    - 172.32.0.1:53:53/udp
  hostname: node_1
  command: -server -advertise 192.168.20.10 -bootstrap-expect 2 -ui-dir /ui
appregistrator:
  build: registrator/
  hostname: app
  command: consul://192.168.20.10:8500
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  links:
    - appconsul
Compose vm2 (web)
webconsul:
  build: consul/
  ports:
    - 192.168.20.11:8300:8300
    - 192.168.20.11:8301:8301
    - 192.168.20.11:8301:8301/udp
    - 192.168.20.11:8302:8302
    - 192.168.20.11:8302:8302/udp
    - 192.168.20.11:8400:8400
    - 192.168.20.11:8500:8500
    - 172.33.0.1:53:53/udp
  hostname: node_2
  command: -server -advertise 192.168.20.11 -join 192.168.20.10
webregistrator:
  build: registrator/
  hostname: web
  command: consul://192.168.20.11:8500
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  links:
    - webconsul
Answer (score: -1)
The problem is the version of the compose file: v2 and v3 show the same behaviour, and it only works with compose file v1.
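For what it's worth, the private addresses that end up in the catalog presumably come from the user-defined networks the v2 file creates (subnets 172.32.0.0/24 and 172.33.0.0/24 above). This can be checked with something like the following (the network name is the one Compose generates from the project name, e.g. dockerdata_net-app here):

# Show the subnet and the container addresses on the network created by the v2 file
docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}{{range .Containers}}{{.IPv4Address}} {{end}}' dockerdata_net-app

If the v2/v3 format has to be kept, registrator's -ip flag can also be used to force the address it registers, although I have not tested that in this setup.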