Kubernetes pod fails to schedule: x node(s) had volume node affinity conflict

Date: 2018-11-13 19:33:39

Tags: kubernetes docker-volume kubernetes-pvc

This question is similar to Kubernetes Pod Warning: 1 node(s) had volume node affinity conflict. However, I'd like to add more detail about my particular situation.

I'm trying to use the mongodb helm chart.
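For context, the install presumably looked something like this (a sketch in Helm 2 syntax; the chart path and release name are assumptions inferred from the resource names in the output below):

> helm install stable/mongodb --name mongodb --namespace mongodb   # assumed invocation, not from the original session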

I've created a PersistentVolume to be used by the PV Claim created by the pod/chart.

> kubectl describe pv/mongo-store-01
Name:              mongo-store-01
Labels:            <none>
Annotations:       field.cattle.io/creatorId=user-crk5v
                   pv.kubernetes.io/bound-by-controller=yes
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:
Status:            Bound
Claim:             mongodb/mongodb-mongodb
Reclaim Policy:    Retain
Access Modes:      RWO
Capacity:          20Gi
Node Affinity:
  Required Terms:
    Term 0:        hostname in [myhostname]
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /k8s/volumes/mongo-store-01
    HostPathType:  DirectoryOrCreate
Events:            <none>
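For reference, a minimal PV manifest that would produce the describe output above might look like the following sketch (reconstructed from the fields shown; the affinity key hostname is copied verbatim from the output, everything else is assumed):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-store-01
spec:
  capacity:
    storage: 20Gi                  # matches Capacity above
  accessModes:
    - ReadWriteOnce                # shown as RWO above
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /k8s/volumes/mongo-store-01
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: hostname        # copied verbatim from the describe output above
              operator: In
              values:
                - myhostname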

When deploying the chart, the mongo PV Claim appears to be bound correctly.

> kubectl -n mongodb describe pvc
Name:          mongodb-mongodb
Namespace:     mongodb
StorageClass:
Status:        Bound
Volume:        mongo-store-01
Labels:        io.cattle.field/appId=mongodb
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mongodb-mongodb","namespace":"mongodb"},"spec":{"accessModes":["...
               pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      20Gi
Access Modes:  RWO
Events:        <none>
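Based on the truncated last-applied-configuration annotation and the fields above, the PVC presumably looks roughly like this (a sketch: the access mode and size come from the describe output; storageClassName is an assumption based on the blank StorageClass field):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-mongodb
  namespace: mongodb
spec:
  accessModes:
    - ReadWriteOnce        # shown as RWO above
  resources:
    requests:
      storage: 20Gi        # matches the bound capacity above
  storageClassName: ""     # assumption: empty, matching the blank StorageClass field above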

However, the pod fails to schedule, citing a volume node affinity conflict. I'm not sure what's causing it.

> kubectl -n mongodb describe pod
Name:           mongodb-mongodb-7b797bb485-b985x
Namespace:      mongodb
Node:           <none>
Labels:         app=mongodb
                pod-template-hash=3635366041
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/mongodb-mongodb-7b797bb485
Containers:
  mongodb-mongodb:
    Image:      mongo:3.6.5
    Port:       27017/TCP
    Host Port:  0/TCP
    Requests:
      cpu:      100m
      memory:   256Mi
    Liveness:   exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      MONGODB_EXTRA_FLAGS:
    Mounts:
      /etc/mongo/mongod.conf from config (rw)
      /var/lib/mongo from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lsnv7 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mongodb-mongodb
    Optional:  false
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongodb-mongodb
    ReadOnly:   false
  default-token-lsnv7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-lsnv7
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  33s (x3596 over 25m)  default-scheduler  0/24 nodes are available: 21 node(s) had volume node affinity conflict, 3 node(s) had taints that the pod didn't tolerate.

Why is the scheduler failing due to the PV's volume node affinity conflict, even though the PV is properly bound to the provided PVC?
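For reference, the PV's required affinity term and the actual labels on the target node can be compared with standard kubectl (a diagnostic sketch, not output from the original session):

> kubectl get pv mongo-store-01 -o jsonpath='{.spec.nodeAffinity}'   # prints the PV's required node selector terms
> kubectl get node myhostname --show-labels                          # lists the node's labels to compare against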

0 Answers