How can glusterfs create a volume when the heketi endpoints are not in the same namespace as the PV and PVC?

Date: 2020-03-08 08:26:47

Tags: kubernetes namespaces kubernetes-pod glusterfs

I have two namespaces, "runsdata" and "monitoring". The heketi pod and the glusterfs DaemonSet pods both run in the "runsdata" namespace. Now I want to deploy Prometheus monitoring in the "monitoring" namespace, and I need storage for the Prometheus data. So I created a PVC (in the "monitoring" namespace) and a PV, and declared the storageClass in the PVC YAML so that the corresponding volume is created to provide storage for Prometheus. But when I create the PVC bound to the PV and apply prometheus-server.yaml, I get this error:

  Warning  FailedMount       18m (x3 over 43m)     kubelet, 172.16.5.151  Unable to attach or mount volumes: unmounted volumes=[prometheus-data-volume], unattached volumes=[prometheus-rules-volume prometheus-token-vcrr2 prometheus-data-volume prometheus-conf-volume]: timed out waiting for the condition
  Warning  FailedMount       13m (x5 over 50m)     kubelet, 172.16.5.151  Unable to attach or mount volumes: unmounted volumes=[prometheus-data-volume], unattached volumes=[prometheus-token-vcrr2 prometheus-data-volume prometheus-conf-volume prometheus-rules-volume]: timed out waiting for the condition
  Warning  FailedMount       3m58s (x35 over 59m)  kubelet, 172.16.5.151  MountVolume.NewMounter initialization failed for volume "data-prometheus-pv" : endpoints "heketi-storage-endpoints" not found

From the log above it is not hard to see that the "heketi-storage-endpoints" Endpoints object needed to mount the volume cannot be found, because the heketi endpoints only exist under "runsdata". How can I solve this problem?
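
This is easy to verify: the Endpoints object kubelet is looking for exists only in "runsdata", not in "monitoring" where the Prometheus pod and its PVC live (plain kubectl commands; the second one is expected to return a NotFound error in this setup):

# present: heketi created this Endpoints object in its own namespace
kubectl get endpoints heketi-storage-endpoints -n runsdata
# absent in the namespace where the Prometheus pod runs, which matches
# the "endpoints ... not found" message from kubelet
kubectl get endpoints heketi-storage-endpoints -n monitoring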

Additional information:

1. The PV and PVC:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-prometheus-pv
  labels:
    pv: data-prometheus-pv
    release: stable
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: runsdata-static-class
  glusterfs:
    endpoints: "heketi-storage-endpoints"
    path: "runsdata-glusterfs-static-class"
    readOnly: true

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-prometheus-claim
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: runsdata-static-class
  selector:
    matchLabels:
      pv: data-prometheus-pv
      release: stable

[root@localhost online-prometheus]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS            REASON   AGE
data-config-pv                             1Gi        RWX            Retain           Bound    runsdata/data-config-claim         runsdata-static-class            5d22h
data-mongo-pv                              1Gi        RWX            Retain           Bound    runsdata/data-mongo-claim          runsdata-static-class            4d4h
data-prometheus-pv                         2Gi        RWX            Recycle          Bound    monitoring/data-prometheus-claim   runsdata-static-class            151m
data-static-pv                             1Gi        RWX            Retain           Bound    runsdata/data-static-claim         runsdata-static-class            7d15h
pvc-02f5ce74-db7c-40ba-b0e1-ac3bf3ba1b37   3Gi        RWX            Delete           Bound    runsdata/data-test-claim           runsdata-static-class            3d5h
pvc-085ec0f1-6429-4612-9f71-309b94a94463   1Gi        RWX            Delete           Bound    runsdata/data-file-claim           runsdata-static-class            3d17h
[root@localhost online-prometheus]# kubectl get pvc -n monitoring
NAME                    STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS            AGE
data-prometheus-claim   Bound    data-prometheus-pv   2Gi        RWX            runsdata-static-class   151m
[root@localhost online-prometheus]#
2. The heketi and glusterfs pods:
[root@localhost online-prometheus]# kubectl get pods -n runsdata|egrep "heketi|gluster"
glusterfs-5btbl                               1/1     Running   1          11d
glusterfs-7gmbh                               1/1     Running   3          11d
glusterfs-rmx7k                               1/1     Running   7          11d
heketi-78ccdb6fd-97tkv                        1/1     Running   2          10d
[root@localhost online-prometheus]#
3. The StorageClass definition:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: runsdata-static-class
provisioner: kubernetes.io/glusterfs
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  resturl: "http://10.10.11.181:8080"
  volumetype: "replicate:3"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "runsdata-gf-admin"
  #secretNamespace: "runsdata"
  #secretName: "heketi-secret"
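
As an aside, the commented-out secretNamespace/secretName parameters are the secret-based alternative to passing the heketi admin key in plain text via restuserkey. For reference, a minimal sketch of the Secret those two lines would point to; the object name and namespace are taken from the comments above, and the key value is just the restuserkey base64-encoded (illustrative only, not part of the original setup):

apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: runsdata
type: kubernetes.io/glusterfs
data:
  # echo -n "runsdata-gf-admin" | base64
  key: cnVuc2RhdGEtZ2YtYWRtaW4=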

1 Answer:

Answer 0 (score: 1)

The solution is to create the Endpoints and a Service under the current namespace ("monitoring"). Then we can use that service in the PV YAML, as shown below:

[root@localhost gluster]# cat glusterfs-endpoints.yaml 
---
kind: Endpoints
apiVersion: v1
metadata:
  name: glusterfs-cluster
  namespace: monitoring
subsets:
- addresses:
  - ip: 172.16.5.150
  - ip: 172.16.5.151
  - ip: 172.16.5.152
  ports:
  - port: 1
    protocol: TCP
[root@localhost gluster]# cat glusterfs-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
  namespace: monitoring
spec:
  ports:
    - port: 1
[root@localhost gluster]#
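
For completeness, the PV from the question then has to reference the new Endpoints object by its name, "glusterfs-cluster", instead of "heketi-storage-endpoints", because kubelet resolves that name in the namespace of the pod that mounts the volume (which is exactly why creating it under "monitoring" fixes the error). A sketch of the adjusted PV, assuming the same gluster volume path as before; note that readOnly most likely needs to be false so Prometheus can write its data:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-prometheus-pv
  labels:
    pv: data-prometheus-pv
    release: stable
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: runsdata-static-class
  glusterfs:
    # the Endpoints/Service created above in the "monitoring" namespace
    endpoints: "glusterfs-cluster"
    path: "runsdata-glusterfs-static-class"
    readOnly: false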