We have a multi-node setup for our product in which we need to deploy multiple Elasticsearch pods. Since these are all data nodes and have volume mounts for persistent storage, we don't want to bring up two pods on the same node. I'm trying to use Kubernetes' anti-affinity feature, but to no avail.
The cluster deployment is done through Rancher. There are 5 nodes in the cluster, and three of them (say node-1, node-2 and node-3) carry the label test.service.es-master: "true". So, when I deploy the Helm chart and scale it up to 3, the Elasticsearch pods come up and run on all three of those nodes. But if I scale it to 4, the 4th data node lands on one of the nodes already mentioned. Is that the correct behaviour? My understanding was that imposing a strict anti-affinity should prevent pods from being scheduled onto the same node. I have referred to multiple blogs and forums (e.g. this and this), and they suggest changes similar to mine. I'm attaching the relevant section of the Helm chart below.
The requirement is that we should bring up ES only on those nodes that are tagged with the specific key-value pair mentioned above, and each of those nodes should host only one pod. Any feedback is appreciated.
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    test.service.es-master: "true"
  name: {{ .Values.service.name }}
  namespace: default
spec:
  clusterIP: None
  ports:
  ...
  selector:
    test.service.es-master: "true"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    test.service.es-master: "true"
  name: {{ .Values.service.name }}
  namespace: default
spec:
  selector:
    matchLabels:
      test.service.es-master: "true"
  serviceName: {{ .Values.service.name }}
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: test.service.es-master
            operator: In
            values:
            - "true"
        topologyKey: kubernetes.io/hostname
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      creationTimestamp: null
      labels:
        test.service.es-master: "true"
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                - "true"
            topologyKey: kubernetes.io/hostname
      securityContext:
        ...
      volumes:
        ...
...
status: {}
Update-1
As suggested in the comments and answers, I have added the anti-affinity section to template.spec. Unfortunately the issue still persists. The updated yaml looks like below:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    test.service.es-master: "true"
  name: {{ .Values.service.name }}
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: {{ .Values.service.httpport | quote }}
    port: {{ .Values.service.httpport }}
    targetPort: {{ .Values.service.httpport }}
  - name: {{ .Values.service.tcpport | quote }}
    port: {{ .Values.service.tcpport }}
    targetPort: {{ .Values.service.tcpport }}
  selector:
    test.service.es-master: "true"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    test.service.es-master: "true"
  name: {{ .Values.service.name }}
  namespace: default
spec:
  selector:
    matchLabels:
      test.service.es-master: "true"
  serviceName: {{ .Values.service.name }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      creationTimestamp: null
      labels:
        test.service.es-master: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                - "true"
            topologyKey: kubernetes.io/hostname
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                - "true"
            topologyKey: kubernetes.io/hostname
      securityContext:
        readOnlyRootFilesystem: false
      volumes:
      - name: elasticsearch-data-volume
        hostPath:
          path: /opt/ca/elasticsearch/data
      initContainers:
      - name: elasticsearch-data-volume
        image: busybox
        securityContext:
          privileged: true
        command: ["sh", "-c", "chown -R 1010:1010 /var/data/elasticsearch/nodes"]
        volumeMounts:
        - name: elasticsearch-data-volume
          mountPath: /var/data/elasticsearch/nodes
      containers:
      - env:
        {{- range $key, $val := .Values.data }}
        - name: {{ $key }}
          value: {{ $val | quote }}
        {{- end}}
        image: {{ .Values.image.registry }}/analytics/{{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Values.service.name }}
        ports:
        - containerPort: {{ .Values.service.httpport }}
        - containerPort: {{ .Values.service.tcpport }}
        volumeMounts:
        - name: elasticsearch-data-volume
          mountPath: /var/data/elasticsearch/nodes
        resources:
          limits:
            memory: {{ .Values.resources.limits.memory }}
          requests:
            memory: {{ .Values.resources.requests.memory }}
      restartPolicy: Always
status: {}
Answer 0 (score: 3)
As suggested by Egor, you need podAntiAffinity:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
So, with your current labels, it might look something like this:
spec:
  affinity:
    nodeAffinity:
      # node affinity stuff here
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: "test.service.es-master"
            operator: In
            values:
            - "true"
        topologyKey: "kubernetes.io/hostname"
Make sure you put it in the right place in your yaml, otherwise it won't work.
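To make that placement concrete, here is a minimal sketch, trimmed to the scheduling-related fields and reusing the labels and Helm values from the question (treat the names as assumptions). The whole affinity block sits under spec.template.spec of the Deployment, not directly under the Deployment's spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.service.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      test.service.es-master: "true"
  template:
    metadata:
      labels:
        test.service.es-master: "true"
    spec:                          # <- the affinity block belongs here, on the pod template
      affinity:
        nodeAffinity:              # restrict scheduling to the labelled nodes (no topologyKey here)
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                - "true"
        podAntiAffinity:           # keep pods carrying this label apart, one per hostname
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                - "true"
            topologyKey: kubernetes.io/hostname
      containers:
      - name: {{ .Values.service.name }}
        image: {{ .Values.image.registry }}/analytics/{{ .Values.image.repository }}:{{ .Values.image.tag }}

With the required (hard) variant of podAntiAffinity in place, a 4th replica that cannot find a labelled node without an existing matching pod should stay Pending rather than doubling up on a node.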
Answer 1 (score: 1)
This works for me with Kubernetes 1.11.5:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      test.service.es-master: "true"
  template:
    metadata:
      labels:
        test.service.es-master: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                - "true"
            topologyKey: kubernetes.io/hostname
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test.service.es-master
                operator: In
                values:
                - "true"
      containers:
      - image: nginx:1.7.10
        name: nginx
I don't know why you chose the same key/value for the pod deployment selector label as for the node selector. It is confusing at the very least...
Answer 2 (score: 1)
Firstly, both in your initial manifest and in the updated manifest, you are using topologyKey under nodeAffinity, which will give you an error when you try to deploy these manifests with kubectl create or kubectl apply, because there is no api key called topologyKey for nodeAffinity. Ref doc
Secondly, you are using a key called test.service.es-master for your nodeAffinity. Are you sure your "node" has those labels? Please confirm with this command: kubectl get nodes --show-labels
Lastly, augmenting @Laszlo's answer and your comment to @bitswazsky on it, to simplify it, you can use the below code:
Here I have used a node label (as key) called role to identify the node; you can add it to a node of your existing cluster by executing this command: kubectl label nodes <node-name> role=platform
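The snippet the answer refers to is not included above. As a hedged sketch of what it describes, the following uses the role: platform node label from the labelling command for nodeAffinity and a separate, purely illustrative app: es pod label for podAntiAffinity (the Deployment name and image are placeholders, not from the original answer):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: es-data
spec:
  replicas: 3
  selector:
    matchLabels:
      app: es                      # pod label, deliberately distinct from the node label
  template:
    metadata:
      labels:
        app: es
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: role          # node label added via "kubectl label nodes <node-name> role=platform"
                operator: In
                values:
                - platform
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - es
            topologyKey: kubernetes.io/hostname
      containers:
      - name: elasticsearch
        image: busybox             # placeholder; substitute the actual Elasticsearch image

Using different keys for the node label and the pod label keeps the two selectors independent: nodeAffinity decides which nodes are eligible, while podAntiAffinity only compares against other pods of this Deployment.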