My question: what happens when I run
kubectl -n test scale --replicas=5 -f web-api-deployment.yaml
Kubernetes cluster: 3 masters, 5 worker nodes
AWS: an Elastic Load Balancer forwards port 443 to each Kubernetes worker node
Pod deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: test
  name: WEB-API
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: WEB-API
    spec:
      containers:
      - name: WEB-API
        image: WEB-API:latest
        env:
        - name: NGINX_WORKER_PROCESSES
          value: "1"
        - name: KEEPALIVETIMEOUT
          value: "0"
        - name: NGINX_WORKER_CONNECTIONS
          value: "2048"
        resources:
          requests:
            cpu: 500m
            memory: 500Mi
        ports:
        - containerPort: 443
        volumeMounts:
        - name: config-volume
          mountPath: /opt/config/
        - name: aws-volume
          mountPath: /root/.aws
---
apiVersion: v1
kind: Service
metadata:
  namespace: prd
  name: WEB-API
  annotations:
    external-dns.alpha.kubernetes.io/hostname: someaddress
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:xxxxxxxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  labels:
    app: WEB-API
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: https
    port: 443
    targetPort: 80
    protocol: TCP
  selector:
    app: WEB-API
  sessionAffinity: None
  type: LoadBalancer
Answer 0 (score: 0)
There is no reason it won't scale to more than one pod per node; unless the cluster has at least that many nodes, the scheduler will try to spread the workload optimally, which in your case (5 replicas, 5 worker nodes) means 1 pod per node. Are your pods stuck in Pending? If so, check their describe output for information about why they were not scheduled. You can also cordon/drain nodes to see how the 5 pods behave when fewer nodes are available for scheduling.
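For reference, a typical inspection workflow might look like the following sketch; the pod name (web-api-xxxx) and node name (worker-1) are placeholders for whatever `kubectl get` reports in your cluster:

```shell
# List the pods, their status, and which node each landed on
kubectl -n test get pods -o wide

# Inspect a Pending pod; the Events section at the bottom explains
# why the scheduler could not place it (placeholder pod name)
kubectl -n test describe pod web-api-xxxx

# Mark a node unschedulable and evict its pods to watch the
# remaining replicas re-spread (placeholder node name)
kubectl cordon worker-1
kubectl drain worker-1 --ignore-daemonsets

# Re-enable scheduling on the node afterwards
kubectl uncordon worker-1
```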
The 443 bind lives in each Pod's network namespace, so you can listen on port 443 in as many Pods as you like at the same time. There are no port conflicts, because each Pod has its own localhost and its own Pod IP.
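You can verify this yourself: each replica gets a distinct Pod IP and binds 443 inside its own namespace, and the Service tracks every pod IP:port pair as an endpoint. A quick check (label and Service name taken from the manifests above):

```shell
# Each pod shows a distinct IP, so five pods can all listen on 443
kubectl -n test get pods -l app=WEB-API -o wide

# The Service's endpoints list every backing pod IP:port pair
kubectl -n test get endpoints WEB-API
```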