I want to (for now) use a local host bind directory to keep SonarQube's application state. Below I describe how I tried to achieve this in a self-hosted Kubernetes (1.11.3) cluster.

The problem I am running into is that, although everything works, Kubernetes does not use the host path to persist the data (/opt/sonarqube/postgresql). Running docker inspect on the SonarQube container shows that it uses the binds below.

How can I mount using the host path?
"Binds": [
    "/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volume-subpaths/sonarqube-pv-postgresql/sonarqube/0:/opt/sonarqube/conf",
    "/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volumes/kubernetes.io~configmap/startup:/tmp-script/:ro",
    "/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volume-subpaths/sonarqube-pv-postgresql/sonarqube/2:/opt/sonarqube/data",
    "/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volume-subpaths/sonarqube-pv-postgresql/sonarqube/3:/opt/sonarqube/extensions",
    "/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volumes/kubernetes.io~secret/default-token-zrjdj:/var/run/secrets/kubernetes.io/serviceaccount:ro",
    "/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/etc-hosts:/etc/hosts",
    "/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/containers/sonarqube/95053a5c:/dev/termination-log"
]
Here is what I did to set up the application.

I created a StorageClass for PVs that mount a local path:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage-nowait
provisioner: kubernetes.io/no-provisioner
Then I created two PVs for the SonarQube helm chart like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sonarqube-pv-postgresql
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  hostPath:
    path: /opt/sonarqube/postgresql
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - myhost
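The nodeAffinity stanza above is what pins the PV to myhost, so the pod can only run where that host directory exists. As a rough sketch (not the real Kubernetes scheduler, just an illustration of the In operator semantics), the match works like this:

```python
def matches_node_affinity(node_labels, match_expressions):
    """Return True if every expression in one nodeSelectorTerm matches the node's labels."""
    for expr in match_expressions:
        value = node_labels.get(expr["key"])
        # The "In" operator requires the node's label value to be in the listed values.
        if expr["operator"] == "In" and value not in expr["values"]:
            return False
    return True

# The single matchExpressions entry from the PV spec above:
term = [{"key": "kubernetes.io/hostname", "operator": "In", "values": ["myhost"]}]

print(matches_node_affinity({"kubernetes.io/hostname": "myhost"}, term))  # True
print(matches_node_affinity({"kubernetes.io/hostname": "other"}, term))   # False
```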
I started the SonarQube helm chart with this additional configuration so that it uses the PVs I just created:
image:
  tag: 7.1
persistence:
  enabled: true
  storageClass: local-storage
  accessMode: ReadWriteOnce
  size: 10Gi
postgresql:
  persistence:
    enabled: true
    storageClass: local-storage
    accessMode: ReadWriteOnce
    size: 10Gi
Answer 0 (score: 2)
If you look at the documentation here:

- HostPath (single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)

So that is probably why you see the data in a different place. I tried it myself and my PVC stayed in Pending. Instead you can use a local volume, like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
  labels:
    vol: myvolume
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - example-node
Then you have to create the corresponding PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Gi
  storageClassName: local-storage
  selector:
    matchLabels:
      vol: "myvolume"
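The selector.matchLabels block is what ties this claim to the example-pv above (together with the matching storageClassName). As a toy illustration (not the real binding controller), the filtering works roughly like this:

```python
# Hypothetical in-memory PV list mirroring the manifests in this answer.
pvs = [
    {"name": "example-pv", "labels": {"vol": "myvolume"}, "storageClassName": "local-storage"},
    {"name": "other-pv",   "labels": {"vol": "other"},    "storageClassName": "local-storage"},
]

def candidates(pvs, match_labels, storage_class):
    """Return names of PVs whose class and labels satisfy the claim."""
    return [
        pv["name"]
        for pv in pvs
        if pv["storageClassName"] == storage_class
        and all(pv["labels"].get(k) == v for k, v in match_labels.items())
    ]

print(candidates(pvs, {"vol": "myvolume"}, "local-storage"))  # ['example-pv']
```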
Then use it in your pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: myclaim
You can also use hostPath directly in the pod spec, if you don't care which node the pod lands on and are fine with each node holding different data:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        # directory location on host
        path: /data
        # this field is optional
        type: DirectoryOrCreate
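The type: DirectoryOrCreate field tells the kubelet to use /data on the host if it exists and to create it otherwise. A toy Python analogue of that behavior (illustration only; the real kubelet also handles ownership and permissions, and the demo path here is a throwaway temp directory, not a node path):

```python
import os
import tempfile

def directory_or_create(path):
    """Toy analogue of hostPath `type: DirectoryOrCreate`:
    use the directory if present, otherwise create it (with parents)."""
    os.makedirs(path, exist_ok=True)
    return os.path.isdir(path)

# Demo against a throwaway directory rather than a real node path:
demo_root = tempfile.mkdtemp()
demo_path = os.path.join(demo_root, "data")
created = directory_or_create(demo_path)   # creates the directory on first call
print(created)                             # True
```

Calling it again on the same path is a no-op, which matches the "or" in DirectoryOrCreate.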