How to run Dgraph on a bare-metal Kubernetes cluster

Date: 2020-05-19 14:17:29

Tags: kubernetes yaml kubectl bare-metal dgraph

I am trying to set up Dgraph as an HA cluster, but it will not deploy when no volumes exist.

Applying the provided config directly on a bare-metal cluster does not work.

$ kubectl get pod --namespace dgraph
dgraph-alpha-0                      0/1     Pending     0          112s
dgraph-ratel-7459974489-ggnql       1/1     Running     0          112s
dgraph-zero-0                       0/1     Pending     0          112s


$ kubectl describe pod/dgraph-alpha-0 --namespace dgraph
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  error while running "VolumeBinding" filter plugin for pod "dgraph-alpha-0": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  error while running "VolumeBinding" filter plugin for pod "dgraph-alpha-0": pod has unbound immediate PersistentVolumeClaims

Has anyone else run into this? I have been stuck on it for several days and cannot find a way around it. How do I get Dgraph to use the cluster's local storage?

Thanks

2 answers:

Answer 0 (score: 1)

Found a working solution myself.

I had to manually create the PVs and PVCs; only then would Dgraph use them during deployment.

Here is the config I used to create the required StorageClass, PVs, and PVCs:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-0
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-1
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-2
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-0
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-1
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-2
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-0
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-1
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-2
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-0
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-1
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-2
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

After deploying Dgraph, it binds to the PVCs:

$ kubectl get pvc -n dgraph -o wide
NAME                            STATUS   VOLUME                          CAPACITY   ACCESS MODES   STORAGECLASS   AGE     VOLUMEMODE
datadir-dgraph-dgraph-alpha-0   Bound    datadir-dgraph-dgraph-zero-2    8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-alpha-1   Bound    datadir-dgraph-dgraph-alpha-0   8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-alpha-2   Bound    datadir-dgraph-dgraph-zero-0    8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-zero-0    Bound    datadir-dgraph-dgraph-alpha-1   8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-zero-1    Bound    datadir-dgraph-dgraph-alpha-2   8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-zero-2    Bound    datadir-dgraph-dgraph-zero-1    8Gi        RWO            local          6h40m   Filesystem
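
One caveat worth noting about the table above: the PVCs bind to arbitrarily chosen PVs (e.g. the alpha-0 claim bound to the zero-2 volume), because all six PVs look interchangeable to the scheduler. hostPath PVs also carry no node affinity, so on a multi-node cluster a pod can be rescheduled onto a node where that path holds no data. A hedged sketch of one PV rewritten with the `local` volume type and a node pin — the node name `worker-1` is a placeholder, not from the original config:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-0
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /mnt/dgraph/alpha-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1   # placeholder: the node that actually holds this data
```

With `local` volumes the scheduler only places the pod on the pinned node, so the data and the pod stay together across restarts.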

Answer 1 (score: 0)

Dgraph's config assumes the Kubernetes cluster has a working volume plugin (provisioner). On managed Kubernetes offerings (AWS, GKE, DO, etc.) the provider has already taken care of this step.

I think the goal should be parity with the cloud providers, i.e. provisioning must be dynamic (for example, the OP's own answer works, but it is statically provisioned (k8s docs), which is the opposite).

When running on bare metal, a volume plugin must be set up manually before you can dynamically provision volumes (k8s docs) for StatefulSets, PersistentVolumeClaims, and so on. Thankfully, there are many provisioners available (k8s docs). Every item in that list with "Internal Provisioner" checked supports dynamic provisioning out of the box.

So, although there are many solutions to this problem, I ended up using NFS. To get dynamic provisioning I had to use an external provisioner. Thankfully, that is as simple as installing a Helm chart.

  1. Install NFS (original guide) on the master node. SSH into it via a terminal and run

sudo apt update
sudo apt install nfs-kernel-server nfs-common
  2. Create the directory Kubernetes will use and change its ownership

sudo mkdir -p /var/nfs/kubernetes
sudo chown nobody:nogroup /var/nfs/kubernetes
  3. Configure NFS

Open the file /etc/exports

sudo nano /etc/exports

Add the following line at the bottom

/var/nfs/kubernetes  client_ip(rw,sync,no_subtree_check)

Replace client_ip with the master node's IP. In my case, that was the DHCP lease my router gave the machine running the master node (192.168.1.7).
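
Note that the export's client field controls which hosts may mount the share. On a multi-node cluster, every node that might run a pod needs access, not just one machine; one common approach is to export to the whole LAN subnet (assuming, as a hypothetical, that all nodes sit on 192.168.1.0/24):

```
/var/nfs/kubernetes  192.168.1.0/24(rw,sync,no_subtree_check)
```

A subnet export avoids having to re-edit /etc/exports every time a node is added.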

  4. Restart NFS to apply the changes.

sudo systemctl restart nfs-kernel-server
  5. With NFS set up on the master node and assuming Helm is present, installing the provisioner is as simple as running

helm install nfs-provisioner --set nfs.server=XXX.XXX.XXX.XXX --set nfs.path=/var/nfs/kubernetes --set storageClass.defaultClass=true stable/nfs-client-provisioner

Replace the value of the nfs.server flag with the appropriate IP/hostname of the master node / NFS server.

Note that the flag storageClass.defaultClass must be true so that Kubernetes uses this plugin (provisioner) for volume creation by default.

The nfs.path flag is the same path created in step 2.

In case Helm complains that it can not find the chart, run helm repo add stable https://kubernetes-charts.storage.googleapis.com/
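
Once the provisioner's StorageClass is the default, any PVC that omits storageClassName is provisioned dynamically, which is exactly what Dgraph's manifests rely on. A minimal sketch to verify this before deploying Dgraph (the claim name test-claim is hypothetical, not part of Dgraph's config):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim   # hypothetical name, just to verify provisioning works
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # no storageClassName: the default class from the chart is used
```

If kubectl get pvc shows this claim going to Bound on its own, dynamic provisioning is working and the claim can be deleted again.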

  6. After completing the previous steps successfully, proceed to install the Dgraph config as described in their docs, and enjoy a working out-of-the-box Dgraph deployment on bare metal with dynamic provisioning.

Single server

kubectl create --filename https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single/dgraph-single.yaml

HA cluster

kubectl create --filename https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml