Kubernetes unable to mount volumes for pod: timeout expired

Date: 2016-08-17 12:59:45

Tags: linux amazon-web-services kubernetes

I am trying to mount an NFS volume into my pod, without success.

I have a server exporting an NFS share. When I try to connect to it from other running servers, mounting works fine:

sudo mount -t nfs -o proto=tcp,port=2049 10.0.0.4:/export /mnt

Another thing worth mentioning: when I remove the volume from the deployment while the pod is running, I can log into the pod and successfully telnet to 10.0.0.4 on ports 111 and 2049. So there does not appear to be any communication problem.
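For reference, the connectivity checks from inside the pod were along these lines:

telnet 10.0.0.4 111     # portmapper: connects successfully
telnet 10.0.0.4 2049    # nfsd: connects successfully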

Also:

showmount -e 10.0.0.4
Export list for 10.0.0.4:
/export/drive 10.0.0.0/16
/export       10.0.0.0/16
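The corresponding /etc/exports on the server looks roughly like this (the export options shown are illustrative, not copied from the machine):

# /etc/exports -- entries matching the showmount output above (options illustrative)
/export       10.0.0.0/16(rw,sync,no_subtree_check)
/export/drive 10.0.0.0/16(rw,sync,no_subtree_check)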

So I can assume there is no network or configuration problem between the server and the client (I am on Amazon, and the server I tested from is in the same security group as the k8s minions).

P.S.: The server is a plain Ubuntu machine with a 50 GB disk.

Kubernetes v1.3.4

So I started by creating my PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.4
    path: "/export"

My PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
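Both objects bind with a plain kubectl create (the file names here are only for illustration):

kubectl create -f nfs-pv.yaml
kubectl create -f nfs-claim.yaml
kubectl get pv,pvc    # both report STATUS: Bound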

This is how kubectl describes them:

Name:       nfs
Labels:     <none>
Status:     Bound
Claim:      default/nfs-claim
Reclaim Policy: Retain
Access Modes:   RWX
Capacity:   50Gi
Message:
Source:
    Type:   NFS (an NFS mount that lasts the lifetime of a pod)
    Server: 10.0.0.4
    Path:   /export
    ReadOnly:   false
No events.

Name:       nfs-claim
Namespace:  default
Status:     Bound
Volume:     nfs
Labels:     <none>
Capacity:   0
Access Modes:
No events.

The pod deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mypod
  labels:
    name: mypod
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      name: mypod
      labels:
        # Important: these labels need to match the selector above, the api server enforces this constraint
        name: mypod
    spec:
      containers:
      - name: abcd
        image: irrelevant to the question
        ports:
        - containerPort: 80
        env:
        - name: hello
          value: world
        volumeMounts:
        - mountPath: "/mnt"
          name: nfs
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs-claim
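Deploying and inspecting it (again, the file name is only for illustration):

kubectl create -f mypod-deployment.yaml
kubectl get pods -l name=mypod
kubectl describe pod -l name=mypod    # produces the output below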

When I deploy the pod, I get the following:

Volumes:
  nfs:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-claim
    ReadOnly:   false
  default-token-6pd57:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-6pd57
QoS Tier:   BestEffort
Events:
  FirstSeen LastSeen    Count   From                            SubobjectPath   Type        Reason      Message
  --------- --------    -----   ----                            -------------   --------    ------      -------
  13m       13m     1   {default-scheduler }                            Normal      Scheduled   Successfully assigned xxx-2140451452-hjeki to ip-10-0-0-157.us-west-2.compute.internal
  11m       7s      6   {kubelet ip-10-0-0-157.us-west-2.compute.internal}          Warning     FailedMount Unable to mount volumes for pod "xxx-2140451452-hjeki_default(93ca148d-6475-11e6-9c49-065c8a90faf1)": timeout expired waiting for volumes to attach/mount for pod "xxx-2140451452-hjeki"/"default". list of unattached/unmounted volumes=[nfs]
  11m       7s      6   {kubelet ip-10-0-0-157.us-west-2.compute.internal}          Warning     FailedSync  Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "xxx-2140451452-hjeki"/"default". list of unattached/unmounted volumes=[nfs]

I have tried everything I know and everything I can think of. What am I missing or doing wrong here?

1 Answer:

Answer 0 (score: 1):

I tested Kubernetes versions 1.3.4 and 1.3.5, and NFS mounts did not work for me on either. Later I switched to 1.2.5, and that version gave me more detailed information (kubectl describe pod ...). It turned out that 'nfs-common' was missing from the hyperkube image. After I added nfs-common to all container instances based on the hyperkube image on the master and worker nodes, the NFS share started working normally (the mount succeeded). That was the situation here. I tested it in practice and it solved my problem.
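In practical terms, the fix amounts to installing the NFS client utilities on every master and worker node (a sketch, assuming Debian/Ubuntu hosts):

# on each master/worker node (Debian/Ubuntu assumed)
sudo apt-get update
sudo apt-get install -y nfs-common

# sanity check: the NFS mount helper should now be present
which mount.nfs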
