Kubernetes - How to assign Pods to nodes with a specific label

Date: 2020-07-16 14:39:20

Tags: kubernetes kubernetes-pod

Suppose I have the following nodes, labeled env=staging and env=production:

server0201     Ready    worker   79d   v1.18.2   10.2.2.22     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0202     Ready    worker   79d   v1.18.2   10.2.2.23     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0203     Ready    worker   35d   v1.18.3   10.2.2.30     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     staging
server0301     Ready    worker   35d   v1.18.3   10.2.3.21     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0302     Ready    worker   35d   v1.18.3   10.2.3.29     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0303     Ready    worker   35d   v1.18.0   10.2.3.30     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     staging
server0304     Ready    worker   65d   v1.18.2   10.2.6.22     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
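
For reference, labels like these would typically have been applied with kubectl label; a sketch using the node names above (assuming this is how the cluster was labeled):

kubectl label node server0203 server0303 env=staging
kubectl label node server0201 server0202 server0301 server0302 server0304 env=production

# confirm the env column
kubectl get nodes -L env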

I tried using both nodeSelector and nodeAffinity, but when I select the env=staging label, all my pods always land on server0203 and never on server0303, no matter how many replicas I create.

If I use env=production, they only ever land on server0201.

What can I do to ensure the pods are spread evenly across the nodes carrying these labels?

Here is my Deployment spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: gab
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: helloworld
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: env
                operator: In
                values:
                - staging
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        ports:
        - containerPort: 80
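
As an aside, for a single required label the affinity rule above is equivalent to the simpler nodeSelector form; a sketch of just the pod-spec fragment:

spec:
  nodeSelector:
    env: staging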

There are no taints on the worker nodes (only the masters are tainted):

kubectl get nodes -o json | jq '.items[].spec.taints'
[
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/master"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/master"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/master"
  }
]
null
null
null
null
null
null
null

All node labels:

server0201     Ready    worker   80d   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0202,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0202     Ready    worker   80d   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0203,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0203     Ready    worker   35d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=staging,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0210,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0301     Ready    worker   35d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0301,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0302     Ready    worker   35d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0309,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0303     Ready    worker   35d   v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=staging,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0310,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0304     Ready    worker   65d   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0602,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker

1 Answer:

Answer 0 (score: 1):

After playing around with it, I realized that nodeSelector and nodeAffinity were actually fine. In fact, by using the node-selector annotation scoped to my namespace, I could even achieve exactly what my question set out to do:

apiVersion: v1
kind: Namespace
metadata:
  name: gab
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: env=production
spec: {}
status: {}    
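
As far as I know, this annotation is read by the PodNodeSelector admission plugin, so it only takes effect if that plugin is enabled on the API server; a sketch of the relevant flag:

# appended to the existing kube-apiserver flags
--enable-admission-plugins=PodNodeSelector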

As long as my Deployment sits in that namespace, the node selector works:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: gab
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 10 # tells deployment to run 10 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        ports:
        - containerPort: 80
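
To check where the replicas actually landed, the node column from kubectl is enough (standard kubectl; only the gab namespace above is assumed):

kubectl get pods -n gab -o wide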

Now, as to why it behaved that way for me at first: the second of the two staging-labeled nodes had noticeably higher utilization than the one my pods kept landing on:

  Resource           Requests     Limits
  --------           --------     ------
  cpu                3370m (14%)  8600m (35%)
  memory             5350Mi (4%)  8600Mi (6%)
  ephemeral-storage  0 (0%)       0 (0%)

The node I kept landing on showed:

  Resource           Requests    Limits
  --------           --------    ------
  cpu                1170m (4%)  500100m (2083%)
  memory             164Mi (0%)  100Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)

When I tested by switching to production, the pods were spread across several nodes, since there are more of them.

So my conclusion (I may be wrong) is that the scheduler balances pods based on server load rather than trying to distribute them evenly across the matching nodes.
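
If strictly even spreading is the goal, one option (untested on this cluster) is to keep the node affinity and add a preferred pod anti-affinity against the Deployment's own app label, which nudges replicas onto different hosts; on 1.19+, topologySpreadConstraints would be the cleaner tool. A sketch of the pod-template spec fragment:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: env
            operator: In
            values:
            - staging
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: helloworld
          topologyKey: kubernetes.io/hostname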