Problems installing GlusterFS on a Kubernetes cluster with Heketi

Date: 2019-10-22 07:18:19

Tags: kubernetes glusterfs

I am trying to install GlusterFS on my Kubernetes cluster using Heketi. I ran gk-deploy, but it reports that the pods are not found:

Using Kubernetes CLI.
Using namespace "default".
Checking for pre-existing resources...
 GlusterFS pods ... not found.
 deploy-heketi pod ... not found.
 heketi pod ... not found.
 gluster-s3 pod ... not found.
Creating initial resources ... Error from server (AlreadyExists): error when creating "/heketi/gluster-kubernetes/deploy/kube-templates/heketi-service-account.yaml": serviceaccounts "heketi-service-account" already exists
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "heketi-sa-view" already exists
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view not labeled
OK
node/sapdh2wrk1 not labeled
node/sapdh2wrk2 not labeled
node/sapdh2wrk3 not labeled
daemonset.extensions/glusterfs created
Waiting for GlusterFS pods to start ... pods not found.

I have run gk-deploy several times already.
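The "AlreadyExists" errors above are a symptom of re-running the script over leftovers from earlier attempts. A sketch of a clean retry, assuming the gk-deploy script from the gluster-kubernetes repository (the path matches the one in the error output):

```shell
# Tear down the partially created Heketi/GlusterFS resources from
# earlier runs, then redeploy from a clean state.
cd /heketi/gluster-kubernetes/deploy
./gk-deploy --abort   # removes resources created by previous runs
./gk-deploy -g        # -g also deploys the GlusterFS daemonset
```

The `--abort` option is part of gk-deploy itself; it avoids having to delete the service account, cluster role binding, and daemonset by hand.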

My Kubernetes cluster has 3 nodes, and it seems the pods cannot start on any of them, but I don't understand why. The containers are created but never become ready:

kubectl get pods
NAME                      READY   STATUS              RESTARTS   AGE
glusterfs-65mc7           0/1     Running             0          16m
glusterfs-gnxms           0/1     Running             0          16m
glusterfs-htkmh           0/1     Running             0          16m
heketi-754dfc7cdf-zwpwn   0/1     ContainerCreating   0          74m
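To see why a pod stays unready, it can be inspected directly; a minimal sketch (the pod name is taken from the output above and will differ on each run):

```shell
# Show the pod's events and current state, including probe failures.
kubectl describe pod glusterfs-65mc7

# Container log output, if any.
kubectl logs glusterfs-65mc7

# Check glusterd's status from inside the container, since the
# readiness probe runs "systemctl -q is-active glusterd.service" there.
kubectl exec -it glusterfs-65mc7 -- systemctl status glusterd.service
kubectl exec -it glusterfs-65mc7 -- journalctl -u glusterd.service --no-pager
```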

Here is the event list (`kubectl describe`) for one of the GlusterFS pods; it ends with warnings:

Events:
  Type     Reason     Age                 From                 Message
  Normal   Scheduled  19m                 default-scheduler    Successfully assigned default/glusterfs-65mc7 to sapdh2wrk1
  Normal   Pulled     19m                 kubelet, sapdh2wrk1  Container image "gluster/gluster-centos:latest" already present on machine
  Normal   Created    19m                 kubelet, sapdh2wrk1  Created container
  Normal   Started    19m                 kubelet, sapdh2wrk1  Started container
  Warning  Unhealthy  13m (x12 over 18m)  kubelet, sapdh2wrk1  Liveness probe failed: /usr/local/bin/status-probe.sh
failed check: systemctl -q is-active glusterd.service
  Warning  Unhealthy  3m58s (x35 over 18m)  kubelet, sapdh2wrk1  Readiness probe failed: /usr/local/bin/status-probe.sh
failed check: systemctl -q is-active glusterd.service

Glusterfs-5.8-100.1 is installed and started on every node, including the master. What could be the reason the pods fail to start?
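The probe failure means glusterd inside the container is not active, which usually points at the node prerequisites from the gluster-kubernetes setup guide. A sketch of checks to run on each worker node (hedged: the module list comes from that guide, and the host-glusterd check reflects that a glusterd already running on the host can clash with the containerized one over its ports):

```shell
# Kernel modules the GlusterFS pods need on the host.
lsmod | grep -E 'dm_snapshot|dm_mirror|dm_thin_pool'

# Load any that are missing (-a loads all listed modules).
sudo modprobe -a dm_snapshot dm_mirror dm_thin_pool

# A glusterd service on the host itself can conflict with the one
# inside the pod; if it is active here, that is worth investigating.
sudo systemctl is-active glusterd.service
```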

0 Answers