I hope you can help me solve this problem.

I have set up a CI/CD pipeline on GitLab that triggers build, package, and deploy jobs every time the Spring Boot RESTful service application is checked in. All three stages and jobs run successfully, but whenever I test the application by navigating to the load balancer URL in a browser, one load balancer target returns a (type=Not Found, status=404) error while the other target returns the expected JSON response. The load balancer uses the default round-robin algorithm when distributing requests across targets.

The infrastructure provider is DigitalOcean.

What am I doing wrong?

Please find the .gitlab-ci.yml file below:
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci

stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: package
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build -t registry.gitlab.com/username/mta-hosting-optimizer .
    - docker push registry.gitlab.com/username/mta-hosting-optimizer

digitalocean-deploy:
  image: cdrx/rancher-gitlab-deploy
  stage: deploy
  script:
    - upgrade --environment Default --stack mta-hosting-optimizer --service web --new-image registry.gitlab.com/username/mta-hosting-optimizer
    - upgrade --environment Default --stack mta-hosting-optimizer --service web2 --new-image registry.gitlab.com/username/mta-hosting-optimizer
docker-compose.yml
version: '2'
services:
  web:
    image: registry.gitlab.com/username/mta-hosting-optimizer:latest
    ports:
      - 8082:8080/tcp
  mta-hosting-optimizer-lb:
    image: rancher/lb-service-haproxy:v0.9.1
    ports:
      - 80:80/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin,agent
      io.rancher.container.agent_service.drain_provider: 'true'
      io.rancher.container.create_agent: 'true'
  web2:
    image: registry.gitlab.com/username/mta-hosting-optimizer:latest
    ports:
      - 8082:8080/tcp
rancher-compose.yml
version: '2'
services:
  web:
    scale: 1
    start_on_create: true
  mta-hosting-optimizer-lb:
    scale: 1
    start_on_create: true
    lb_config:
      certs: []
      port_rules:
        - path: ''
          priority: 1
          protocol: http
          service: web
          source_port: 80
          target_port: 8080
        - priority: 2
          protocol: http
          service: web2
          source_port: 80
          target_port: 8080
    health_check:
      response_timeout: 2000
      healthy_threshold: 2
      port: 42
      unhealthy_threshold: 3
      initializing_timeout: 60000
      interval: 2000
      reinitializing_timeout: 60000
  web2:
    scale: 1
    start_on_create: true
Edited to add the haproxy.cfg file below in response to @leodotcloud's request:
global
    chroot /var/lib/haproxy
    daemon
    group haproxy
    maxconn 4096
    maxpipes 1024
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA
    ssl-default-bind-options no-sslv3 no-tlsv10 no-tls-tickets
    ssl-default-server-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA
    stats socket /var/run/haproxy.sock mode 600 level admin
    stats timeout 2m
    user haproxy

defaults
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
    maxconn 4096
    mode tcp
    option forwardfor
    option http-server-close
    option redispatch
    retries 3
    timeout client 50000
    timeout connect 5000
    timeout server 50000

resolvers rancher
    nameserver dnsmasq xxx.xxx.xxx.xxx:53

listen default
    bind *:42

frontend 80
    bind *:80
    mode http
    default_backend 80_

backend 80_
    acl forwarded_proto hdr_cnt(X-Forwarded-Proto) eq 0
    acl forwarded_port hdr_cnt(X-Forwarded-Port) eq 0
    http-request add-header X-Forwarded-Port %[dst_port] if forwarded_port
    http-request add-header X-Forwarded-Proto https if { ssl_fc } forwarded_proto
    mode http
    server 89fbd2fd02e5b178c8c60ecf5ddc74yyyyyyyyyy xx.xx.166.99:8080
    server 30c794d4a7524307ae3244a602caf1yyyyyyyyyy xx.xx.158.63:8080
Answer 0 (score: 0)
There is no direct answer to your question, but here are a few pointers for debugging it:

If none of these helps you resolve the problem, please file an issue at https://github.com/rancher/rancher.
Answer 1 (score: 0)
What is probably happening is that the first web container to start grabs the port (8080), and the second web container then cannot bind to it. Even when the two containers run on different nodes, the port cannot be assigned to both of them because of the way Swarm routes requests through its routing mesh.

See: https://docs.docker.com/engine/swarm/services/#publish-a-services-ports-directly-on-the-swarm-node

See also this issue: https://github.com/moby/moby/issues/33160
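For reference, here is the relevant excerpt from the question's docker-compose.yml: both web and web2 publish the same host port (8082, mapped to container port 8080), which is the kind of conflict described above:

  web:
    ports:
      - 8082:8080/tcp
  web2:
    ports:
      - 8082:8080/tcp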
To work around this, configure the web containers to run on different ports. Alternatively, since you appear to be running two instances of the same container anyway, increase the replica count of the web service instead; Swarm will distribute the containers across the available nodes for you. A sketch of both options follows below.
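A minimal sketch of the two options, reusing the file, service, and image names from the question (the alternative host port 8083 in the first option is just an example value, not something from the original setup):

Option 1 - docker-compose.yml: give web2 its own host port so both containers can bind:

  web2:
    image: registry.gitlab.com/username/mta-hosting-optimizer:latest
    ports:
      - 8083:8080/tcp

Option 2 - rancher-compose.yml: drop the separate web2 service and scale web instead:

  web:
    scale: 2
    start_on_create: true

With option 2 the web2 entries in docker-compose.yml and in the load balancer's port_rules would be removed as well; whether the fixed 8082:8080 host port mapping can stay depends on whether two instances ever land on the same host.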
HAProxy then only needs to load-balance across the nodes; in practice, if you hit any node's IP address, the request may be routed to a different node inside the Swarm, because the cluster load-balances across all of its machines itself. That gives you two layers of load balancing: between the client and the infrastructure, and inside the Swarm cluster once the request has reached one of the nodes.