I am creating an Elasticsearch cluster on AKS with 3 master nodes, 2 data nodes, and 1 ingest node, using Elasticsearch version 7.9.1.
I have created the cluster successfully, but I am running into a problem with master election.
Problem: if I delete the active master node, a dedicated data node is automatically elected as the new active master. Sometimes it even elects the ingest node as the active master.
I want only the dedicated master nodes to be eligible in the active-master election.
I suspected "discovery.seed_hosts" might be the cause. So I removed

- name: discovery.seed_hosts
  value: "elasticsearch-discovery"

and added

- name: discovery.seed_hosts
  value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"
With that change the master nodes come up fine, but when I apply the data node YAML it throws this error:
{"type": "server", "timestamp": "2020-10-08T18:49:54,640Z", "level": "WARN", "component": "o.e.d.SeedHostsResolver", "cluster.name": "docker-cluster", "node.name": "elasticsearch-data-0", "message": "failed to resolve host [elasticsearch-master-0]",
"stacktrace": ["java.net.UnknownHostException: elasticsearch-master-0",
"at java.net.InetAddress$CachedAddresses.get(InetAddress.java:800) ~[?:?]",
"at java.net.InetAddress.getAllByName0(InetAddress.java:1495) ~[?:?]",
"at java.net.InetAddress.getAllByName(InetAddress.java:1354) ~[?:?]",
"at java.net.InetAddress.getAllByName(InetAddress.java:1288) ~[?:?]",
"at org.elasticsearch.transport.TcpTransport.parse(TcpTransport.java:548) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at org.elasticsearch.transport.TcpTransport.addressesFromString(TcpTransport.java:490) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:855) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at org.elasticsearch.discovery.SeedHostsResolver.lambda$resolveHostsLists$0(SeedHostsResolver.java:144) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:651) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:832) [?:?]"] }
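This UnknownHostException looks consistent with bare pod names like elasticsearch-master-0 not being resolvable cluster-wide: StatefulSet pods only get per-pod DNS records through a headless governing Service, and even then under the fully qualified form pod.service.namespace.svc.cluster.local. A sketch of what the seed list might need to look like under that assumption (service and namespace names taken from my manifests below); I have not confirmed this is the fix:

```yaml
# Hypothetical alternative: fully qualified per-pod DNS names. These resolve
# only if elasticsearch-discovery is a headless Service (clusterIP: None).
- name: discovery.seed_hosts
  value: "elasticsearch-master-0.elasticsearch-discovery.poc-elasticsearch.svc.cluster.local,elasticsearch-master-1.elasticsearch-discovery.poc-elasticsearch.svc.cluster.local,elasticsearch-master-2.elasticsearch-discovery.poc-elasticsearch.svc.cluster.local"
```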
So now I have doubts about my own configuration.
elasticsearch-discovery Service:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: master
spec:
  selector:
    app: elasticsearch
    role: master
  ports:
  - name: transport
    port: 9300
    protocol: TCP
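If per-pod DNS entries are wanted (for example, to list the individual masters in discovery.seed_hosts), I believe the governing Service would need to be headless. A minimal sketch, assuming the same names as above; I have not verified this variant myself:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
  namespace: poc-elasticsearch
spec:
  clusterIP: None   # headless: creates per-pod DNS records for the StatefulSet
  selector:
    app: elasticsearch
    role: master
  ports:
  - name: transport
    port: 9300
    protocol: TCP
```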
Master node YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-master
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: master
spec:
  serviceName: elasticsearch-discovery
  selector:
    matchLabels:
      app: elasticsearch
  replicas: 3
  template:
    metadata:
      labels:
        app: elasticsearch
        role: master
    spec:
      terminationGracePeriodSeconds: 30
      # Use the stork scheduler to enable more efficient placement of the pods
      #schedulerName: stork
      initContainers:
      - name: increase-the-vm-max-map-count
        image: busybox
        #imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-poc-master-pod
        image: XXXXXXXX/elasticsearch-oss:7.9.1-amd64
        #imagePullPolicy: Always
        env:
        - name: network.host
          value: "0.0.0.0"
        - name: discovery.seed_hosts
          value: "elasticsearch-discovery"
        - name: cluster.initial_master_nodes
          value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: "CLUSTER_NAME"
          value: "XXXXXXXXX"
        - name: "NUMBER_OF_MASTERS"
          value: "3"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_INGEST
          value: "false"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 2Gi
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-master
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-master
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: azurefile
      resources:
        requests:
          storage: 5Gi
Data node Service:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-data
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: data
spec:
  ports:
  - port: 9300
    name: transport
  clusterIP: None
  selector:
    app: elasticsearch
    role: data
Data node YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-data
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: elasticsearch-data
  selector:
    matchLabels:
      app: elasticsearch
  replicas: 2
  template:
    metadata:
      labels:
        app: elasticsearch
        role: data
    spec:
      terminationGracePeriodSeconds: 30
      # Use the stork scheduler to enable more efficient placement of the pods
      #schedulerName: stork
      initContainers:
      - name: increase-the-vm-max-map-count
        image: busybox
        #imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-poc-data-pod
        image: XXXXXXXXXXXXXXXXXXX/elasticsearch-oss:7.9.1-amd64
        #imagePullPolicy: Always
        env:
        - name: DISCOVERY_SERVICE
          value: elasticsearch-discovery
        - name: discovery.seed_hosts
          value: "elasticsearch-discovery"
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: "CLUSTER_NAME"
          value: "docker-cluster"
        - name: NODE_MASTER
          value: "false"
        - name: NODE_INGEST
          value: "false"
        - name: NODE_DATA
          value: "true"
        - name: HTTP_ENABLE
          value: "true"
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 1Gi
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: azurefile
      resources:
        requests:
          storage: 5Gi
Ingest node YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-ingest
  namespace: poc-elasticsearch
  labels:
    app: elasticsearch
    role: ingest
spec:
  serviceName: elasticsearch-ingest
  selector:
    matchLabels:
      app: elasticsearch
  replicas: 1
  template:
    metadata:
      labels:
        app: elasticsearch
        role: ingest
    spec:
      terminationGracePeriodSeconds: 30
      # Use the stork scheduler to enable more efficient placement of the pods
      #schedulerName: stork
      initContainers:
      - name: increase-the-vm-max-map-count
        image: busybox
        #imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-poc-ingest-pod
        image: XXXXXXXXXXXXXXXXXXXXXXX/elasticsearch-oss:7.9.1-amd64
        #imagePullPolicy: Always
        env:
        - name: network.host
          value: "0.0.0.0"
        - name: DISCOVERY_SERVICE
          value: elasticsearch-discovery
        - name: discovery.seed_hosts
          value: "elasticsearch-discovery"
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: "CLUSTER_NAME"
          value: "docker-cluster"
        - name: NODE_MASTER
          value: "false"
        - name: NODE_INGEST
          value: "true"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 1Gi
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
One odd thing I noticed: all of my nodes show the role string "dimr". I am confused about whether they were created correctly or incorrectly. I expect 3 master, 2 data, and 1 ingest node.
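My current hypothesis is that "dimr" on every node means the NODE_MASTER/NODE_INGEST/NODE_DATA variables never reach Elasticsearch, so each node keeps the default roles (data, ingest, master, remote_cluster_client). Those uppercase variables only do something if the image's entrypoint translates them; with a stock elasticsearch-oss image, settings can instead be passed as dotted env vars, the same way I already pass network.host and discovery.seed_hosts. A sketch for a dedicated master node under that assumption (untested on my cluster):

```yaml
# Hypothetical replacement for the NODE_* variables on the master StatefulSet:
# pass the Elasticsearch settings themselves as env vars.
- name: node.master
  value: "true"
- name: node.data
  value: "false"
- name: node.ingest
  value: "false"
```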