Error: ICP 3.1.1 Grafana and Prometheus Kubernetes pods always stay in 'Init' status

Posted: 2018-12-11 11:11:51

Tags: kubernetes grafana prometheus ibm-cloud-private prometheus-alertmanager

I have completed an installation of ICP with VA. The cluster uses GlusterFS internally and consists of 1 master, 1 proxy, 1 management, 1 VA, and 3 worker nodes.


This is the list of Kubernetes pods that are running.

Checked using 'kubectl get pods':

Storage - GlusterFS PersistentVolumes on ICP:

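Roughly, the commands I used to check the pods and the GlusterFS volumes were along these lines (a sketch of my checks; the monitoring stack is installed in the kube-system namespace, as the events below show):

    # List the pods in the kube-system namespace, including the node each one runs on
    kubectl -n kube-system get pods -o wide

    # Check the GlusterFS-backed PersistentVolumeClaims and PersistentVolumes
    kubectl -n kube-system get pvc
    kubectl get pv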

The following 'kubectl describe' output shows the error information for each failing Kubernetes pod.
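The events below were gathered roughly like this (a sketch of the describe commands, using the pod names from my cluster):

    kubectl -n kube-system describe pod custom-metrics-adapter-5d5b694df7-cggz8
    kubectl -n kube-system describe pod monitoring-grafana-799d7fcf97-sj64j
    kubectl -n kube-system describe pod monitoring-prometheus-85546d8575-jr89h
    kubectl -n kube-system describe pod monitoring-prometheus-alertmanager-65445b66bd-6bfpn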


custom-metrics-adapter

Events:
      Type    Reason     Age   From                     Message
      ----    ------     ----  ----                     -------
      Normal  Scheduled  17m   default-scheduler        Successfully assigned kube-system/custom-metrics-adapter-5d5b694df7-cggz8 to 192.168.10.126
      Normal  Pulled     17m   kubelet, 192.168.10.126  Container image "swgcluster.icp:8500/ibmcom/curl:4.0.0" already present on machine
      Normal  Created    17m   kubelet, 192.168.10.126  Created container
      Normal  Started    17m   kubelet, 192.168.10.126  Started container

monitoring-grafana

Events:
      Type     Reason       Age   From                     Message
      ----     ------       ----  ----                     -------
      Normal   Scheduled    18m   default-scheduler        Successfully assigned kube-system/monitoring-grafana-799d7fcf97-sj64j to 192.168.10.126
      Warning  FailedMount  1m (x8 over 16m)  kubelet, 192.168.10.126  (combined from similar events): MountVolume.SetUp failed for volume "pvc-251f69e3-fd60-11e8-9779-000c2914ff99" : mount failed: mount failed: exit status 32
    Mounting command: systemd-run
    Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e2c85434-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-251f69e3-fd60-11e8-9779-000c2914ff99 --scope -- mount -t glusterfs -o log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-251f69e3-fd60-11e8-9779-000c2914ff99/monitoring-grafana-799d7fcf97-sj64j-glusterfs.log,backup-volfile-servers=192.168.10.115:192.168.10.116:192.168.10.119,auto_unmount,log-level=ERROR 192.168.10.115:vol_946f98c8a92ce2930acd3181d803943c /var/lib/kubelet/pods/e2c85434-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-251f69e3-fd60-11e8-9779-000c2914ff99
    Output: Running scope as unit run-r6ba2425d0e7f437d922dbe0830cd5a97.scope.
    mount: unknown filesystem type 'glusterfs'

     the following error information was pulled from the glusterfs log to help diagnose this issue: could not open log file for pod monitoring-grafana-799d7fcf97-sj64j
      Warning  FailedMount  50s (x8 over 16m)  kubelet, 192.168.10.126  Unable to mount volumes for pod "monitoring-grafana-799d7fcf97-sj64j_kube-system(e2c85434-fd67-11e8-822b-000c2914ff99)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"monitoring-grafana-799d7fcf97-sj64j". list of unmounted volumes=[grafana-storage]. list of unattached volumes=[grafana-storage config-volume dashboard-volume dashboard-config ds-job-config router-config monitoring-ca-certs monitoring-certs router-entry default-token-f6d9q]

monitoring-prometheus

Events:
  Type     Reason       Age   From                     Message
  ----     ------       ----  ----                     -------
  Normal   Scheduled    19m   default-scheduler        Successfully assigned kube-system/monitoring-prometheus-85546d8575-jr89h to 192.168.10.126
  Warning  FailedMount  4m (x6 over 17m)    kubelet, 192.168.10.126  Unable to mount volumes for pod "monitoring-prometheus-85546d8575-jr89h_kube-system(e2ca91a8-fd67-11e8-822b-000c2914ff99)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"monitoring-prometheus-85546d8575-jr89h". list of unmounted volumes=[storage-volume]. list of unattached volumes=[config-volume rules-volume etcd-certs storage-volume router-config monitoring-ca-certs monitoring-certs monitoring-client-certs router-entry lua-scripts-config-config default-token-f6d9q]
  Warning  FailedMount  55s (x11 over 17m)  kubelet, 192.168.10.126  (combined from similar events): MountVolume.SetUp failed for volume "pvc-252001ed-fd60-11e8-9779-000c2914ff99" : mount failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e2ca91a8-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-252001ed-fd60-11e8-9779-000c2914ff99 --scope -- mount -t glusterfs -o auto_unmount,log-level=ERROR,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-252001ed-fd60-11e8-9779-000c2914ff99/monitoring-prometheus-85546d8575-jr89h-glusterfs.log,backup-volfile-servers=192.168.10.115:192.168.10.116:192.168.10.119 192.168.10.115:vol_f101b55d8b1dc3021ec7689713a74e8c /var/lib/kubelet/pods/e2ca91a8-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-252001ed-fd60-11e8-9779-000c2914ff99
Output: Running scope as unit run-r638272b55bca4869b271e8e4b1ef45cf.scope.
mount: unknown filesystem type 'glusterfs'

 the following error information was pulled from the glusterfs log to help diagnose this issue: could not open log file for pod monitoring-prometheus-85546d8575-jr89h

monitoring-prometheus-alertmanager

Events:
  Type     Reason       Age   From                     Message
  ----     ------       ----  ----                     -------
  Normal   Scheduled    20m   default-scheduler        Successfully assigned kube-system/monitoring-prometheus-alertmanager-65445b66bd-6bfpn to 192.168.10.126
  Warning  FailedMount  1m (x9 over 18m)  kubelet, 192.168.10.126  (combined from similar events): MountVolume.SetUp failed for volume "pvc-251ed00f-fd60-11e8-9779-000c2914ff99" : mount failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e2cbe5e7-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-251ed00f-fd60-11e8-9779-000c2914ff99 --scope -- mount -t glusterfs -o backup-volfile-servers=192.168.10.115:192.168.10.116:192.168.10.119,auto_unmount,log-level=ERROR,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-251ed00f-fd60-11e8-9779-000c2914ff99/monitoring-prometheus-alertmanager-65445b66bd-6bfpn-glusterfs.log 192.168.10.115:vol_7766e36a77cbd2c0afe3bd18626bd2c4 /var/lib/kubelet/pods/e2cbe5e7-fd67-11e8-822b-000c2914ff99/volumes/kubernetes.io~glusterfs/pvc-251ed00f-fd60-11e8-9779-000c2914ff99
Output: Running scope as unit run-r35994e15064e48e2a36f69a88009aa5d.scope.
mount: unknown filesystem type 'glusterfs'

 the following error information was pulled from the glusterfs log to help diagnose this issue: could not open log file for pod monitoring-prometheus-alertmanager-65445b66bd-6bfpn
  Warning  FailedMount  23s (x9 over 18m)  kubelet, 192.168.10.126  Unable to mount volumes for pod "monitoring-prometheus-alertmanager-65445b66bd-6bfpn_kube-system(e2cbe5e7-fd67-11e8-822b-000c2914ff99)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"monitoring-prometheus-alertmanager-65445b66bd-6bfpn". list of unmounted volumes=[storage-volume]. list of unattached volumes=[config-volume storage-volume router-config monitoring-ca-certs monitoring-certs router-entry default-token-f6d9q]

1 Answer:

Answer 0 (score: 0)

The problem was solved after reinstalling ICP (IBM Cloud Private).

I checked several possible causes and found that the GlusterFS client had not been fully installed on a few of the nodes.

I checked the GlusterFS client on all nodes using the following command (the OS is Ubuntu):

sudo apt-get install glusterfs-client -y
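After installing the package, a quick way to confirm a node can actually mount GlusterFS volumes is to check for the mount helper. This is a sketch of that verification, not part of my original steps (the /sbin path is the usual location of the helper on Ubuntu):

    # Confirm the glusterfs-client package is installed on the node
    dpkg -l | grep glusterfs-client

    # "mount: unknown filesystem type 'glusterfs'" in the pod events is what the
    # kubelet reports when this mount helper is missing from the node
    ls -l /sbin/mount.glusterfs

    # Print the installed client version
    glusterfs --version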