I have a Docker swarm cluster. Here is the output of $ docker info:
[dannil@ozcluster01 ozms]$ docker info
Containers: 15
Running: 10
Paused: 0
Stopped: 5
Images: 32
Server Version: swarm/1.2.5
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 2
ozcluster01: 192.168.168.41:2375
└ ID: CKCO:JGAA:PIOM:F4PL:6TIH:EQFY:KZ6X:B64Q:HRFH:FSTT:MLJT:BJUY
└ Status: Healthy
└ Containers: 8 (6 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
 └ Reserved Memory: 192 MiB / 3.79 GiB
└ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.13.1.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
└ UpdatedAt: 2016-11-04T09:24:29Z
└ ServerVersion: 1.10.3
ozcluster02: 192.168.168.42:2375
└ ID: 73GR:6M7W:GMWD:D3DO:UASW:YHJ2:BTH6:DCO5:NJM6:SXPN:PXTY:3NHI
└ Status: Healthy
└ Containers: 7 (4 Running, 0 Paused, 3 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 192 MiB / 3.79 GiB
└ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.10.1.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
└ UpdatedAt: 2016-11-04T09:24:14Z
└ ServerVersion: 1.10.3
Then I run docker-compose up -d to start my containers.
The eureka-service has the label constraint:node==ozcluster02,
but the container still starts on ozcluster01.
Here is my docker-compose.yml file:
version: '2'
services:
  rabbitmq:
    image: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
  config-service:
    image: ozms/config-service
    ports:
      - "8888:8888"
    volumes:
      - ~/ozms/configs:/var/tmp/
      - ~/ozms/log:/log
    labels:
      - "affinity:image==ozms/config-service"
  eureka-service:
    image: ozms/eureka-service
    ports:
      - "8761:8761"
    volumes:
      - ~/ozms/log:/log
    labels:
      - "constraint:node==ozcluster02"
    environment:
      - SPRING_RABBITMQ_HOST=rabbitmq
Answer 0 (score: 0)
In Compose file version 3, you should not put the constraint in the labels section; instead, place it in the deploy section:
services:
  ...
  eureka-service:
    ...
    deploy:
      placement:
        constraints:
          - node.hostname==ozcluster02
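Note that the deploy key is only honored when the file is deployed to a swarm-mode cluster (Docker 1.13+) with docker stack deploy; plain docker-compose up ignores it, and the standalone swarm/1.2.x setup shown in the question would need to be migrated to swarm mode first. A minimal sketch under that assumption, with the stack name "ozms" chosen only as an example:

version: '3'
services:
  eureka-service:
    image: ozms/eureka-service
    ports:
      - "8761:8761"
    volumes:
      - ~/ozms/log:/log
    environment:
      - SPRING_RABBITMQ_HOST=rabbitmq
    # Pin this service to the node whose hostname is ozcluster02
    deploy:
      placement:
        constraints:
          - node.hostname == ozcluster02

Deploy it from a swarm manager node (the stack name is arbitrary):

$ docker stack deploy --compose-file docker-compose.yml ozms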