Kubernetes storage on bare metal / private cloud

Date: 2015-04-25 11:48:53

Tags: kubernetes storage persistent kubernetes-pod

I have just started using Kubernetes on a two-node (master/minion) setup on two private cloud servers. I have installed it, done the basic configuration, and got it running some simple pods/services from the master to the minion.

My question is:

How can I use persistent storage with the pods without using Google Cloud?

For my first test I ran a Ghost blog pod, but when I tore the pod down the changes were lost. I tried adding a volume to the pod, but couldn't actually find any documentation on how to do it anywhere other than on Google Cloud.

My attempt:

apiVersion: v1beta1
id: ghost
kind: Pod
desiredState:
  manifest:
    version: v1beta1
    id: ghost
    containers:
      - name: ghost
        image: ghost
        volumeMounts:
          - name: ghost-persistent-storage
            mountPath: /var/lib/ghost
        ports:
          - hostPort: 8080
            containerPort: 2368
    volumes:
      - name: ghost-persistent-storage
        source:
          emptyDir: {}

Found this: Persistent Installation of MySQL and WordPress on Kubernetes

Can't figure out how to add storage (NFS?) to my test installation.

3 Answers:

Answer 0 (score: 2)

In the new API (v1beta3) we've added many more volume types, including NFS volumes. The NFS volume type assumes you already have an NFS server running somewhere to point the pod at. Give it a try and let us know if you have any problems!
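For illustration, an NFS-backed version of the Ghost pod from the question might look like the sketch below, written against the later v1 pod schema. The NFS server address and export path here are made up and would need to match an NFS server you actually run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ghost
spec:
  containers:
    - name: ghost
      image: ghost
      ports:
        - containerPort: 2368
          hostPort: 8080
      volumeMounts:
        - name: ghost-persistent-storage
          mountPath: /var/lib/ghost
  volumes:
    - name: ghost-persistent-storage
      nfs:
        server: 192.168.1.100      # hypothetical NFS server address
        path: /exports/ghost       # hypothetical export path
```

Unlike `emptyDir`, which is deleted along with the pod, the NFS volume's contents survive pod restarts because they live on the external server.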

Answer 1 (score: 1)

Answer 2 (score: 0)

You can try the https://github.com/suquant/glusterd solution.

GlusterFS server in a Kubernetes cluster

The idea is very simple: a cluster manager listens to the Kubernetes API and adds each pod's "metadata.name" and pod IP address to /etc/hosts, so the Gluster peers can find each other by name.
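The name-to-IP mapping step can be sketched as follows. This is a hypothetical helper, not the actual suquant/glusterd code: it takes pod objects (as returned by the Kubernetes API) and renders the /etc/hosts lines the manager would append:

```python
def render_hosts_entries(pods):
    """Render /etc/hosts lines mapping each pod's metadata.name
    to its pod IP, one entry per pod."""
    lines = []
    for pod in pods:
        name = pod["metadata"]["name"]
        ip = pod["status"]["podIP"]
        lines.append(f"{ip}\t{name}")
    return "\n".join(lines)


# Example with two Gluster peer pods (IPs are made up):
peers = [
    {"metadata": {"name": "gluster1"}, "status": {"podIP": "10.1.0.5"}},
    {"metadata": {"name": "gluster2"}, "status": {"podIP": "10.1.0.6"}},
]
print(render_hosts_entries(peers))
```

The real manager would additionally watch the API for pod add/delete events (filtered by the `--namespace` and `--labels` arguments shown below) and rewrite /etc/hosts whenever the peer set changes.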

1. Create the pods

gluster1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: gluster1
  namespace: mynamespace
  labels:
    component: glusterfs-storage
spec:
  nodeSelector:
    host: st01
  containers:
    - name: glusterfs-server
      image: suquant/glusterd:3.6.kube
      imagePullPolicy: Always
      command:
        - /kubernetes-glusterd
      args:
        - --namespace
        - mynamespace
        - --labels
        - component=glusterfs-storage
      ports:
        - containerPort: 24007
        - containerPort: 24008
        - containerPort: 49152
        - containerPort: 38465
        - containerPort: 38466
        - containerPort: 38467
        - containerPort: 2049
        - containerPort: 111
        - containerPort: 111
          protocol: UDP
      volumeMounts:
        - name: brick
          mountPath: /mnt/brick
        - name: fuse
          mountPath: /dev/fuse
        - name: data
          mountPath: /var/lib/glusterd
      securityContext:
        capabilities:
          add:
            - SYS_ADMIN
            - MKNOD
  volumes:
    - name: brick
      hostPath:
        path: /opt/var/lib/brick1
    - name: fuse
      hostPath:
        path: /dev/fuse
    - name: data
      emptyDir: {}

gluster2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: gluster2
  namespace: mynamespace
  labels:
    component: glusterfs-storage
spec:
  nodeSelector:
    host: st02
  containers:
    - name: glusterfs-server
      image: suquant/glusterd:3.6.kube
      imagePullPolicy: Always
      command:
        - /kubernetes-glusterd
      args:
        - --namespace
        - mynamespace
        - --labels
        - component=glusterfs-storage
      ports:
        - containerPort: 24007
        - containerPort: 24008
        - containerPort: 49152
        - containerPort: 38465
        - containerPort: 38466
        - containerPort: 38467
        - containerPort: 2049
        - containerPort: 111
        - containerPort: 111
          protocol: UDP
      volumeMounts:
        - name: brick
          mountPath: /mnt/brick
        - name: fuse
          mountPath: /dev/fuse
        - name: data
          mountPath: /var/lib/glusterd
      securityContext:
        capabilities:
          add:
            - SYS_ADMIN
            - MKNOD
  volumes:
    - name: brick
      hostPath:
        path: /opt/var/lib/brick1
    - name: fuse
      hostPath:
        path: /dev/fuse
    - name: data
      emptyDir: {}

2. Run the pods

kubectl create -f gluster1.yaml
kubectl create -f gluster2.yaml

3. Manage the GlusterFS servers

kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster peer probe gluster2"
kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster peer status"
kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster volume create media replica 2 transport tcp,rdma gluster1:/mnt/brick gluster2:/mnt/brick force"
kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster volume start media"

4. Usage

gluster-svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: glusterfs-storage
  namespace: mynamespace
spec:
  ports:
    - name: glusterfs-api
      port: 24007
      targetPort: 24007
    - name: glusterfs-infiniband
      port: 24008
      targetPort: 24008
    - name: glusterfs-brick0
      port: 49152
      targetPort: 49152
    - name: glusterfs-nfs-0
      port: 38465
      targetPort: 38465
    - name: glusterfs-nfs-1
      port: 38466
      targetPort: 38466
    - name: glusterfs-nfs-2
      port: 38467
      targetPort: 38467
    - name: nfs-rpc
      port: 111
      targetPort: 111
    - name: nfs-rpc-udp
      port: 111
      targetPort: 111
      protocol: UDP
    - name: nfs-portmap
      port: 2049
      targetPort: 2049
  selector:
    component: glusterfs-storage

Run the service:

kubectl create -f gluster-svc.yaml

After that, you can mount the NFS export inside the cluster via the hostname "glusterfs-storage.mynamespace".
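A consumer pod might then mount the "media" Gluster volume over NFS like this (a sketch; the pod name, image, and mount path are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: media-consumer
  namespace: mynamespace
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: media
          mountPath: /mnt/media
  volumes:
    - name: media
      nfs:
        server: glusterfs-storage.mynamespace
        path: /media
```

One caveat: the NFS mount is performed by the kubelet on the host, which on some setups cannot resolve cluster-internal service names; in that case you may need to use the service's cluster IP as the `server` value instead of the hostname.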
