I have a REST application integrated with Kubernetes for testing REST queries. Now, when I execute a POST query from the client, the status of the automatically created Job remains PENDING indefinitely. The same happens to the Pod that is created automatically.
When I dig deeper into the events in the dashboard, the volume gets attached but fails to mount, giving this error:
Unable to mount volumes for pod "ingestion-88dhg_default(4a8dd589-e3d3-4424-bc11-27d51822d85b)": timeout expired waiting for volumes to attach or mount for pod "default"/"ingestion-88dhg". list of unmounted volumes=[cdiworkspace-volume]. list of unattached volumes=[cdiworkspace-volume default-token-qz2nb]
I have defined a persistent volume and a persistent volume claim manually with the code below, but they are not connected to any pod. Should I do that?
PV
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "cdiworkspace",
    "selfLink": "/api/v1/persistentvolumes/cdiworkspace",
    "uid": "92252f76-fe51-4225-9b63-4d6228d9e5ea",
    "resourceVersion": "100026",
    "creationTimestamp": "2019-07-10T09:49:04Z",
    "annotations": {
      "pv.kubernetes.io/bound-by-controller": "yes"
    },
    "finalizers": [
      "kubernetes.io/pv-protection"
    ]
  },
  "spec": {
    "capacity": {
      "storage": "10Gi"
    },
    "fc": {
      "targetWWNs": [
        "50060e801049cfd1"
      ],
      "lun": 0
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "claimRef": {
      "kind": "PersistentVolumeClaim",
      "namespace": "default",
      "name": "cdiworkspace",
      "uid": "0ce96c77-9e0d-4b1f-88bb-ad8b84072000",
      "apiVersion": "v1",
      "resourceVersion": "98688"
    },
    "persistentVolumeReclaimPolicy": "Retain",
    "storageClassName": "standard",
    "volumeMode": "Block"
  },
  "status": {
    "phase": "Bound"
  }
}
PVC
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "cdiworkspace",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/persistentvolumeclaims/cdiworkspace",
    "uid": "0ce96c77-9e0d-4b1f-88bb-ad8b84072000",
    "resourceVersion": "100028",
    "creationTimestamp": "2019-07-10T09:32:16Z",
    "annotations": {
      "pv.kubernetes.io/bind-completed": "yes",
      "pv.kubernetes.io/bound-by-controller": "yes",
      "volume.beta.kubernetes.io/storage-provisioner": "k8s.io/minikube-hostpath"
    },
    "finalizers": [
      "kubernetes.io/pvc-protection"
    ]
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "10Gi"
      }
    },
    "volumeName": "cdiworkspace",
    "storageClassName": "standard",
    "volumeMode": "Block"
  },
  "status": {
    "phase": "Bound",
    "accessModes": [
      "ReadWriteOnce"
    ],
    "capacity": {
      "storage": "10Gi"
    }
  }
}
Result of journalctl -xe _SYSTEMD_UNIT=kubelet.service:
Jul 01 09:47:26 rehan-B85M-HD3 kubelet[22759]: E0701 09:47:26.979098 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:47:40 rehan-B85M-HD3 kubelet[22759]: E0701 09:47:40.979722 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:47:55 rehan-B85M-HD3 kubelet[22759]: E0701 09:47:55.978806 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:08 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:08.979375 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:23 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:23.979463 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:37 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:37.979005 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:48 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:48.977686 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:02 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:02.979125 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:17 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:17.979408 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:28 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:28.977499 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:41 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:41.977771 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:53 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:53.978605 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:05 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:05.980251 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:16 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:16.979292 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:31 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:31.978346 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:42 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:42.979302 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:55 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:55.978043 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:51:08 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:08.977540 22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:51:24 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:24.190929 22759 remote_image.go:113] PullImage "friendly/myplanet:0.0.1-SNAPSHOT" from image service failed: rpc error: code = Unknown desc = E
Jul 01 09:51:24 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:24.190971 22759 kuberuntime_image.go:51] Pull image "friendly/myplanet:0.0.1-SNAPSHOT" failed: rpc error: code = Unknown desc = Error response
Jul 01 09:51:24 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:24.191024 22759 kuberuntime_manager.go:775] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon:
Deployment YAML
{
  "kind": "Deployment",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "back",
    "namespace": "default",
    "selfLink": "/apis/extensions/v1beta1/namespaces/default/deployments/back",
    "uid": "9f21717c-2c04-459f-b47a-95fd8e11728d",
    "resourceVersion": "298987",
    "generation": 1,
    "creationTimestamp": "2019-07-16T13:16:15Z",
    "labels": {
      "run": "back"
    },
    "annotations": {
      "deployment.kubernetes.io/revision": "1"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "run": "back"
      }
    },
    "template": {
      "metadata": {
        "creationTimestamp": null,
        "labels": {
          "run": "back"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "back",
            "image": "back:latest",
            "ports": [
              {
                "containerPort": 8080,
                "protocol": "TCP"
              }
            ],
            "resources": {},
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "Never"
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": {
        "maxUnavailable": "25%",
        "maxSurge": "25%"
      }
    },
    "revisionHistoryLimit": 10,
    "progressDeadlineSeconds": 600
  },
  "status": {
    "observedGeneration": 1,
    "replicas": 1,
    "updatedReplicas": 1,
    "unavailableReplicas": 1,
    "conditions": [
      {
        "type": "Progressing",
        "status": "True",
        "lastUpdateTime": "2019-07-16T13:16:34Z",
        "lastTransitionTime": "2019-07-16T13:16:15Z",
        "reason": "NewReplicaSetAvailable",
        "message": "ReplicaSet \"back-7fd9995747\" has successfully progressed."
      },
      {
        "type": "Available",
        "status": "False",
        "lastUpdateTime": "2019-07-19T08:32:49Z",
        "lastTransitionTime": "2019-07-19T08:32:49Z",
        "reason": "MinimumReplicasUnavailable",
        "message": "Deployment does not have minimum availability."
      }
    ]
  }
}
How do I mount the volume onto the pod correctly and automatically? I don't want to manually create a Pod for every REST service and assign a volume to it.
Answer 0 (score: 1)
I went through the pod's logs again and realized that the arguments required by python file1 were not being provided, which caused the container to crash. I tested this by supplying all the missing arguments pointed out in the logs, passing them to the pod in deployment.yaml, which now looks like this:
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: back
  template:
    metadata:
      creationTimestamp:
      labels:
        app: back
    spec:
      containers:
        - name: back
          image: back:latest
          imagePullPolicy: Never
          env:
            - name: scihub_username
              value: test
            - name: scihub_password
              value: test
            - name: CDINRW_BASE_URL
              value: 10.1.40.11:8081/swagger-ui.html
            - name: CDINRW_JOB_ID
              value: 3fa85f64-5717-4562-b3fc-2c963f66afa6
          ports:
            - containerPort: 8081
              protocol: TCP
          volumeMounts:
            - mountPath: /data
              name: test-volume
      volumes:
        - name: test-volume
          hostPath:
            # directory location on host
            path: /back
            # this field is optional
            type: Directory
This started downloading the data and solved the problem for now, but this is not how I want it to run: I want it to be triggered by a REST API that supplies all the parameters and starts and stops the container. I will create a separate question for that and link it below so anyone can follow up.
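As a rough sketch of that follow-up direction (not tested against a real cluster): instead of hard-coding the parameters in deployment.yaml, the REST endpoint can build a Job manifest per request and submit it through any Kubernetes client. The image name and env keys below are taken from the manifests above; the function and its structure are otherwise an assumption, not the asker's actual code.

```python
import json


def build_job_manifest(job_name, image, env_vars):
    """Build a Kubernetes batch/v1 Job manifest as a plain dict.

    The dict can be submitted with the official `kubernetes` client
    (BatchV1Api.create_namespaced_job) or POSTed as JSON to
    /apis/batch/v1/namespaces/default/jobs.
    """
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": job_name},
        "spec": {
            "template": {
                "spec": {
                    "containers": [{
                        "name": job_name,
                        "image": image,
                        "imagePullPolicy": "Never",
                        # env vars come from the REST request instead of
                        # being hard-coded in deployment.yaml
                        "env": [{"name": k, "value": v}
                                for k, v in env_vars.items()],
                    }],
                    # Jobs must not restart their pods forever
                    "restartPolicy": "Never",
                }
            }
        },
    }


# Example: the same parameters that were hard-coded above,
# now supplied per request.
manifest = build_job_manifest(
    "ingestion",
    "back:latest",
    {"scihub_username": "test", "scihub_password": "test"},
)
print(json.dumps(manifest, indent=2))
```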
Answer 1 (score: 0)
I have defined a persistent volume and persistent volume claim manually using the following code, but they are not connected to any pod. Should I do that?
So far you haven't referenced it anywhere in your Pod definition, right? At least I can't see it in your Deployment. If so, the answer is: yes, you have to do it so that Pods in the cluster can use it.
Let's start from the beginning. Basically, the whole process of configuring a Pod (this also applies to the Pod template in a Deployment definition) to use a PersistentVolume for storage consists of 3 steps [source]:
1. A cluster administrator creates a PersistentVolume that is backed by physical storage. The administrator does not associate the volume with any Pod.
2. A cluster user creates a PersistentVolumeClaim, which is automatically bound to a suitable PersistentVolume.
3. The user creates a Pod (or a Deployment, in whose Pod template spec it can be defined) that uses the PersistentVolumeClaim as storage.
There is no point in describing all the above steps here in detail as it has already been done very well here.
You can verify the availability of the PV/PVC with the following command:
kubectl get pv volume-name
At this stage your volume status should be shown as Bound.
The same goes for kubectl get pvc task-pv-claim (in your case kubectl get pvc cdiworkspace, but I would recommend using a different name for the PersistentVolumeClaim, e.g. cdiworkspace-claim, so it can easily be distinguished from the PersistentVolume itself). This command should also show the status Bound.
Note that a Pod's configuration file specifies only a PersistentVolumeClaim, but it does not specify the PersistentVolume itself. From the Pod's point of view, the claim is a volume. Here is a good description that clearly marks the difference between these two objects [source]:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g. they can be mounted once read/write or many times read-only).
The example spec below, which can be part of a Pod / Deployment definition, references an existing PersistentVolumeClaim:
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
Regarding your question:
How do I mount the volume onto the pod correctly and automatically? I don't want to manually create a Pod for every REST service and assign a volume to it
You don't have to create them manually. You can specify the PersistentVolumeClaim to be used in the Pod template specification of your Deployment definition.
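Adapted to your case, the back Deployment from the question would gain a volumes entry referencing the claim, plus a matching volumeMounts entry in the container. This is a sketch: the volume name cdiworkspace-volume comes from your error message, while the mount path /data is an assumption borrowed from your test deployment.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: back
spec:
  replicas: 1
  selector:
    matchLabels:
      run: back
  template:
    metadata:
      labels:
        run: back
    spec:
      containers:
        - name: back
          image: back:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /data          # assumed mount point inside the container
              name: cdiworkspace-volume
      volumes:
        - name: cdiworkspace-volume
          persistentVolumeClaim:
            claimName: cdiworkspace     # the PVC from the question
```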
Documentation resources:
A detailed step-by-step description of how to configure a Pod to use a PersistentVolume for storage can be found here.
More about the concept of Persistent Volumes in Kubernetes can be found in this article.