Running MongoDB on Kubernetes Minikube with local persistent storage

Date: 2017-03-30 16:28:41

Tags: mongodb kubernetes gcp minikube

I am currently trying to reproduce this tutorial on Minikube:

http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html

I updated the configuration files so that a hostPath is used as persistent storage on the Minikube node.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp"

---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: myclaim
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"
  volumeClaimTemplates:
    - metadata:
        name: myclaim

This produces the following results:

kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM             REASON    AGE
pv0001                                     1Gi        RWO           Retain          Available                               17s
pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3   1Gi        RWO           Delete          Bound       default/myclaim             11s

kubectl get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
myclaim   Bound     pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3   1Gi        RWO           14s

kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes   10.0.0.1     <none>        443/TCP     3d
mongo        None         <none>        27017/TCP   53s

kubectl get pod
No resources found.


kubectl describe service mongo
Name:           mongo
Namespace:      default
Labels:         name=mongo
Selector:       role=mongo
Type:           ClusterIP
IP:         None
Port:           <unset> 27017/TCP
Endpoints:      <none>
Session Affinity:   None
No events.


kubectl get statefulsets
NAME      DESIRED   CURRENT   AGE
mongo     3         0         4h


kubectl describe statefulsets mongo
Name:           mongo
Namespace:      default
Image(s):       mongo,cvallance/mongo-k8s-sidecar
Selector:       environment=test,role=mongo
Labels:         environment=test,role=mongo
Replicas:       0 current / 3 desired
Annotations:        <none>
CreationTimestamp:  Thu, 30 Mar 2017 18:23:56 +0200
Pods Status:        0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath   Type      Reason          Message
  --------- --------    -----   ----            -------------   ----      ------          -------
  1s        1s      4   {statefulset }                          Warning   FailedCreate    pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
  1s        1s      4   {statefulset }                          Warning   FailedCreate    pvc: myclaim-mongo-1, error: PersistentVolumeClaim "myclaim-mongo-1" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
  1s        0s      4   {statefulset }                          Warning   FailedCreate    pvc: myclaim-mongo-2, error: PersistentVolumeClaim "myclaim-mongo-2" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]


kubectl get ev | grep mongo
29s        1m          15        mongo      StatefulSet               Warning   FailedCreate              {statefulset }          pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
29s        1m          15        mongo      StatefulSet               Warning   FailedCreate              {statefulset }          pvc: myclaim-mongo-1, error: PersistentVolumeClaim "myclaim-mongo-1" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
29s        1m          15        mongo      StatefulSet               Warning   FailedCreate              {statefulset }          pvc: myclaim-mongo-2, error: PersistentVolumeClaim "myclaim-mongo-2" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]

kubectl describe pvc myclaim
Name:       myclaim
Namespace:  default
StorageClass:   standard
Status:     Bound
Volume:     pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3
Labels:     <none>
Capacity:   1Gi
Access Modes:   RWO
No events.

minikube version: v0.17.1

It seems the Service cannot find any Pods, which makes debugging with kubectl logs complicated. Is there something wrong with the way I create the persistent volume on the node?

Thanks a lot

1 answer:

Answer 0 (score: 8)

TL;DR

In the situation described in the question, the problem is that the Pods of the StatefulSet do not start at all, so the Service has no targets. The reason they do not start is:

> Warning FailedCreate pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]

Since the volume definition is required by default, the Pods cannot start without it. So edit the StatefulSet's volumeClaimTemplates so that it contains:

volumeClaimTemplates:
- metadata:
    name: myclaim
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi

(There is no need to create the PersistentVolumeClaim manually.)
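For reference, the claims were satisfied automatically because Minikube ships a default StorageClass named `standard`, backed by its hostPath provisioner (this is why `kubectl describe pvc myclaim` above shows `StorageClass: standard`). A sketch of what such a class looks like; the provisioner name is an assumption about what Minikube ships, so verify it on your cluster with `kubectl get storageclass standard -o yaml`:

```yaml
# Sketch of Minikube's default StorageClass (the provisioner name
# is an assumption; check your cluster's actual definition):
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard
provisioner: k8s.io/minikube-hostpath
```

Because this class is the default, any PVC without an explicit storage class (including the ones generated from volumeClaimTemplates) is dynamically provisioned by it.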

More general solution

If you cannot connect to a Service, try this command:

kubectl describe service myservicename

If the output contains something like this:

Endpoints:      <none>

it means there are no targets (usually Pods) running, or the targets are not yet ready. To find out which is the case:

kubectl describe endpoints myservicename

This lists all the endpoints, ready or not. If an endpoint is not ready, investigate the readinessProbe in its Pod. If there is no readinessProbe, try to find out the reason by looking at the StatefulSet (Deployment, ReplicaSet, ReplicationController, etc.) itself, in the Events section of:

kubectl describe statefulset mystatefulsetname
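If a readinessProbe turns out to be missing and you want to add one, a minimal TCP check for a mongod container might look like this (a sketch; the port matches mongod's default, but the timing values are assumptions to adapt):

```yaml
# Hypothetical readinessProbe for the mongo container
# (timings are illustrative, not from the question):
readinessProbe:
  tcpSocket:
    port: 27017          # mongod's default port
  initialDelaySeconds: 5 # give mongod time to start listening
  periodSeconds: 10      # re-check every 10 seconds
```

A Pod only becomes a ready endpoint of the Service once this probe succeeds.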

The same event information can also be obtained this way:

kubectl get ev | grep something

If the Pods are running and ready, then the labels on the Pods and the Service's selector do not match.
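To illustrate that last point with the names from the question: the Service's selector must match the labels in the StatefulSet's Pod template, like so:

```yaml
# The Service selects Pods by label:
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None
  ports:
  - port: 27017
  selector:
    role: mongo          # must match a label on the Pods
---
# ...and the StatefulSet's Pod template must carry that label:
#   template:
#     metadata:
#       labels:
#         role: mongo    # matches the Service selector above
```

If the selector and the Pod labels diverge (for example after a rename on only one side), the Service keeps running but its endpoint list stays empty.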