If you use a regional cluster with a persistent disk, a pod that references the disk is not automatically scheduled to the same zone as the disk

Asked: 2019-06-20 07:52:02

Tags: google-cloud-platform google-kubernetes-engine

According to https://cloud.google.com/kubernetes-engine/docs/concepts/regional-clusters#pd, "Once a persistent disk is provisioned, any Pods referencing the disk are scheduled to the same zone as the disk." But I tested this, and it does not appear to be the case.

Creating the disk:

gcloud compute disks create mongodb --size=1GB --zone=asia-east1-c
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
Created [https://www.googleapis.com/compute/v1/projects/ornate-ensign-234106/zones/asia-east1-c/disks/mongodb].
NAME     ZONE          SIZE_GB  TYPE         STATUS
mongodb  asia-east1-c  1        pd-standard  READY

New disks are unformatted. You must format and mount a disk before it
can be used. You can find instructions on how to do this at:

https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting

Cluster nodes:

Name                                  Zone          In use by                                                      Internal IP         External IP
gke-kubia-default-pool-08dd2133-qbz6  asia-east1-a  k8s-ig--c4addd497b1e0a6d, gke-kubia-default-pool-08dd2133-grp  10.140.0.17 (nic0)  35.201.224.238
gke-kubia-default-pool-183639fa-18vr  asia-east1-c  gke-kubia-default-pool-183639fa-grp, k8s-ig--c4addd497b1e0a6d  10.140.0.18 (nic0)  35.229.152.12
gke-kubia-default-pool-42725220-43q8  asia-east1-b  gke-kubia-default-pool-42725220-grp, k8s-ig--c4addd497b1e0a6d  10.140.0.16 (nic0)  34.80.225.6

YAML used to create the pod:

apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  volumes:
  - name: mongodb-data
    gcePersistentDisk:
      pdName: mongodb
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP

The pod was expected to be scheduled on gke-kubia-default-pool-183639fa-18vr, which is in zone asia-east1-c. But:

C:\kube>kubectl get pod -o wide
NAME          READY   STATUS              RESTARTS   AGE    IP           NODE                                   NOMINATED NODE
fortune       2/2     Running             0          4h9m   10.56.3.5    gke-kubia-default-pool-42725220-43q8   <none>
kubia-4jmzg   1/1     Running             0          9d     10.56.1.6    gke-kubia-default-pool-183639fa-18vr   <none>
kubia-j2lnr   1/1     Running             0          9d     10.56.3.4    gke-kubia-default-pool-42725220-43q8   <none>
kubia-lrt9x   1/1     Running             0          9d     10.56.0.14   gke-kubia-default-pool-08dd2133-qbz6   <none>
mongodb       0/1     ContainerCreating   0          55s    <none>       gke-kubia-default-pool-42725220-43q8   <none>

C:\kube>kubectl describe pod mongodb
Name:               mongodb
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-kubia-default-pool-42725220-43q8/10.140.0.16
Start Time:         Thu, 20 Jun 2019 15:39:13 +0800
Labels:             <none>
Annotations:        kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container mongodb
Status:             Pending
IP:
Containers:
  mongodb:
    Container ID:
    Image:          mongo
    Image ID:
    Port:           27017/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /data/db from mongodb-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sd57s (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  mongodb-data:
    Type:       GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:     mongodb
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
  default-token-sd57s:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sd57s
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason              Age                   From                                           Message
  ----     ------              ----                  ----                                           -------
  Normal   Scheduled           10m                   default-scheduler                              Successfully assigned default/mongodb to gke-kubia-default-pool-42725220-43q8
  Warning  FailedMount         106s (x4 over 8m36s)  kubelet, gke-kubia-default-pool-42725220-43q8  Unable to mount volumes for pod "mongodb_default(7fe9c096-932e-11e9-bb3d-42010a8c00de)": timeout expired waiting for volumes to attach or mount for pod "default"/"mongodb". list of unmounted volumes=[mongodb-data]. list of unattached volumes=[mongodb-data default-token-sd57s]
  Warning  FailedAttachVolume  9s (x13 over 10m)     attachdetach-controller                        AttachVolume.Attach failed for volume "mongodb-data" : GCE persistent disk not found: diskName="mongodb" zone="asia-east1-b"

C:\kube>

Does anyone know why?

1 answer:

Answer 0 (score: 1)

The problem here is that the pod was scheduled onto a node in asia-east1-b, and because the disk was provisioned in asia-east1-c, the disk could not be attached. The FailedAttachVolume event shows this directly: it looked for diskName="mongodb" in zone="asia-east1-b".

You can use a nodeSelector here: add a label to the node, then specify that label in the pod's YAML. That way the pod will be scheduled onto the node in asia-east1-c, where the disk can be attached.
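A minimal sketch of what that pod spec might look like. Rather than adding a custom label, this assumes the nodes carry the zone label Kubernetes sets automatically; on clusters of that era (~1.13) it was `failure-domain.beta.kubernetes.io/zone`, while newer versions use `topology.kubernetes.io/zone` instead:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  # Pin the pod to nodes in the disk's zone. Assumes the legacy built-in
  # zone label; on newer Kubernetes, use topology.kubernetes.io/zone.
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: asia-east1-c
  volumes:
  - name: mongodb-data
    gcePersistentDisk:
      pdName: mongodb
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
```

You can check which zone labels your nodes actually carry with `kubectl get nodes --show-labels` before relying on either label name.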