I can't understand the Deployment's behavior. In my case it always reports the wrong ReplicaSets. First, I ran
kubectl create -f [filename]
with this manifest:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: kube-state-metrics
  name: kube-state-metrics
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-state-metrics
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: kube-state-metrics
However, the Pod could not start because the master node is tainted. I edited the deployment file and added a toleration:
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Equal
    effect: NoSchedule
kubectl replace -f [文件名]
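As a side note, tolerations are a Pod-level field, so they are normally placed under spec.template.spec rather than directly under the Deployment's spec. A minimal sketch of the usual placement (container spec omitted, as in the manifest above):

```yaml
spec:
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Equal
        effect: NoSchedule
```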
Now I have two revisions, but NewReplicaSet is set to the old revision, while OldReplicaSet is set to the modified version. Hmm...
I deleted the Deployment and ran "create" again. Things did not improve:
OldReplicaSets: <none>
NewReplicaSet: kube-state-metrics-59b7dccd55 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 14m deployment-controller Scaled up replica set kube-state-metrics-69c88bb67b to 1
Normal ScalingReplicaSet 14m deployment-controller Scaled down replica set kube-state-metrics-69c88bb67b to 0
Normal ScalingReplicaSet 13m deployment-controller Scaled up replica set kube-state-metrics-59b7dccd55 to 1
OldReplicaSets is empty, yet an old ReplicaSet was clearly in use. And once again, NewReplicaSet is wrong. On top of that, the Deployment shows two revisions:
REVISION CHANGE-CAUSE
1 kubectl create --filename=manifests
2 kubectl create --filename=manifests
How can I fix this?