Mongo setup in k8s is not using the persistent volume

Time: 2019-07-02 15:00:28

Tags: mongodb docker kubernetes minikube

I am trying to mount a local folder as /data/db for mongo in a minikube cluster. No luck so far :(

So I followed these steps. They describe how to create a persistent volume, a persistent volume claim, a service and a pod.

The config files make sense to me, but when I finally spin up the pod it restarts because of an error and then keeps running. The logs from the pod (kubectl logs mongo-0) are:

2019-07-02T13:51:49.177+0000 I CONTROL  [main] note: noprealloc may hurt performance in many applications
2019-07-02T13:51:49.180+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=mongo-0
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] db version v4.0.10
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] git version: c389e7f69f637f7a1ac3cc9fae843b635f20b766
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] modules: none
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] build environment:
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten]     distarch: x86_64
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2019-07-02T13:51:49.184+0000 I CONTROL  [initandlisten] options: { net: { bindIp: "0.0.0.0" }, storage: { mmapv1: { preallocDataFiles: false, smallFiles: true } } }
2019-07-02T13:51:49.186+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-07-02T13:51:49.186+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=483M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-07-02T13:51:51.913+0000 I STORAGE  [initandlisten] WiredTiger message [1562075511:913047][1:0x7ffa7b8fca80], txn-recover: Main recovery loop: starting at 3/1920 to 4/256
2019-07-02T13:51:51.914+0000 I STORAGE  [initandlisten] WiredTiger message [1562075511:914009][1:0x7ffa7b8fca80], txn-recover: Recovering log 3 through 4
2019-07-02T13:51:51.948+0000 I STORAGE  [initandlisten] WiredTiger message [1562075511:948068][1:0x7ffa7b8fca80], txn-recover: Recovering log 4 through 4
2019-07-02T13:51:51.976+0000 I STORAGE  [initandlisten] WiredTiger message [1562075511:976820][1:0x7ffa7b8fca80], txn-recover: Set global recovery timestamp: 0
2019-07-02T13:51:51.979+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2019-07-02T13:51:51.986+0000 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2019-07-02T13:51:51.986+0000 I CONTROL  [initandlisten] 
2019-07-02T13:51:51.986+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-07-02T13:51:51.986+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-07-02T13:51:51.986+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-07-02T13:51:51.986+0000 I CONTROL  [initandlisten] 
2019-07-02T13:51:52.003+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-07-02T13:51:52.005+0000 I NETWORK  [initandlisten] waiting for connections on port 27017

If I connect to the MongoDB pod, mongo is running fine! However, it is not using the persistent volume. Here is my pv.yaml:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/k8s/mongo"

Inside the mongo pod I can see the mongo files in /data/db, but on my local machine the folder (/k8s/mongo) is empty.
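
A hedged side note on checking where the files actually land: with a VM-based minikube driver, a hostPath volume points at a path inside the minikube VM rather than on the local machine, so the VM itself has to be inspected, for example:

# Data files as seen from inside the pod
kubectl exec mongo-0 -- ls /data/db

# The hostPath directory inside the minikube VM (not the local machine)
minikube ssh "ls -la /k8s/mongo"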

Below I also list the persistent volume claim (pvc) and the pod/service yaml.

pvc.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

mongo.yaml:

apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  clusterIP: None
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      volumes:
        - name: mongo-pv-storage
          persistentVolumeClaim:
            claimName: mongo-pv-claim
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--bind_ip"
            - 0.0.0.0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-pv-storage
              mountPath: /data/db
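
As a quick sanity check (a hedged sketch, not from the original post), the binding of the claim and the volumes actually attached to the pod can be inspected with:

# Should list mongo-pv as Bound, with mongo-pv-claim bound to it
kubectl get pv,pvc

# The Volumes and Mounts sections show whether mongo-pv-storage is attached to /data/db
kubectl describe pod mongo-0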

I have also tried the following, instead of using persistentVolumeClaim:

volumes:
  - name: mongo-pv-storage
    hostPath:
      path: /k8s/mongo

The problem is the same, except that there are no errors during creation.

Any suggestions as to what the problem might be, or where I can find more details as a next step?

Also, how do the PV and PVC get connected?

3 answers:

Answer 0 (score: 2):

Please try:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
  labels:
    app: mongodb
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:3
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-volume
              mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi

You can create a brand-new PVC and use it here, or change the name. This worked for me. I also ran into the same problem configuring MongoDB when passing the command; remove the command and try again.
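
A hedged side note, not part of the original answer: a volumeClaimTemplate creates one PVC per pod, named <template-name>-<pod-name>, so the result of the StatefulSet above can be checked with something like:

# PVC generated for the first replica of the StatefulSet above
kubectl get pvc mongo-persistent-volume-mongo-0

# With no storageClassName in the template, minikube's default "standard"
# StorageClass typically provisions the volume dynamically
kubectl get storageclass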

For more details, check this github.

Answer 1 (score: 1):

A few suggestions (may or may not help):

Change your storage class name to a string:

storageClassName: "manual"

It's strange, but it worked for me. Also make sure your path /k8s/mongo has the correct permissions: chmod 777 /k8s/mongo
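
A hedged addition, not from the original answer: on minikube the hostPath directory lives inside the VM, so it would be created and given permissions there, for example:

# Create the hostPath directory inside the minikube VM and open up its permissions
minikube ssh "sudo mkdir -p /k8s/mongo && sudo chmod 777 /k8s/mongo"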

Answer 2 (score: 0):

I can confirm that this works correctly in the k8s docker-for-desktop environment, so the problem is related to minikube. I have tested minikube with both the hyperkit and virtualbox drivers. In both cases, files written to /data/db are not visible in the local folder (/k8s/mongo).
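
A hedged follow-up note, not from the original answer: if the goal is to see the data on the local machine while using minikube, one option is to mount the local folder into the VM before the pod starts, for example:

# Expose the local /k8s/mongo folder inside the minikube VM under the same path
# (runs in the foreground; behaviour depends on the chosen minikube driver)
minikube mount /k8s/mongo:/k8s/mongo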