Elasticsearch high-availability setup in Kubernetes

Date: 2019-03-18 07:25:36

Tags: elasticsearch kubernetes high-availability

We want to set up a highly available Elasticsearch cluster in Kubernetes. We want to deploy the following objects and be able to scale each of them independently:

  1. Master pods
  2. Data pods
  3. Client pods

If you have implemented a setup like this, please share your recommendations. Open-source tools are preferred.

1 answer:

Answer 0: (score: 0)

Here are a few points on the recommended architecture:

  1. Elasticsearch master nodes don't need persistent storage, so use a Deployment to manage them. Use a Service for load balancing between the masters.

Use a ConfigMap to manage their settings. Like this:

 apiVersion: v1
 kind: Service
 metadata:
   name: elasticsearch-discovery
   labels:
     component: elasticsearch
     role: master
     version: v6.5.0 # or whatever version you require
 spec:
   selector:
     component: elasticsearch
     role: master
     version: v6.5.0
   ports:
   - name: transport
     port: 9300 # no need to expose port 9200, as master nodes don't need it
     protocol: TCP
   clusterIP: None
 ---
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: elasticsearch-master-configmap
 data:
   elasticsearch.yml: |
     # these should get you going
     # if you want more fine-grained control, feel free to add other ES settings
     cluster.name: "${CLUSTER_NAME}"
     node.name: "${NODE_NAME}"

     network.host: 0.0.0.0

     # (no_master_eligible_nodes / 2) + 1
     discovery.zen.minimum_master_nodes: 2
     discovery.zen.ping.unicast.hosts: ${DISCOVERY_SERVICE}

     node.master: true
     node.data: false
     node.ingest: false
 ---
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: elasticsearch-master
   labels:
     component: elasticsearch
     role: master
     version: v6.5.0
 spec:
   replicas: 3 # 3 is the recommended minimum
   selector:
     matchLabels:
       component: elasticsearch
       role: master
       version: v6.5.0
   template:
     metadata:
       labels:
         component: elasticsearch
         role: master
         version: v6.5.0
     spec:
        affinity:
          # you can also add node affinity in case you have a specific node pool
          podAntiAffinity:
            # make sure 2 ES processes don't end up on the same machine
           requiredDuringSchedulingIgnoredDuringExecution:
           - labelSelector:
               matchExpressions:
               - key: component
                 operator: In
                 values:
                 - elasticsearch
               - key: role
                 operator: In
                 values:
                 - master
             topologyKey: kubernetes.io/hostname
       initContainers:
         # just basic ES environment configuration
       - name: init-sysctl
         image: busybox:1.27.2
         command:
         - sysctl
         - -w
         - vm.max_map_count=262144
         securityContext:
           privileged: true
       containers:
       - name: elasticsearch-master
          image: # your preferred image
         imagePullPolicy: Always
         env:
         - name: NODE_NAME
           valueFrom:
             fieldRef:
               fieldPath: metadata.name
         - name: CLUSTER_NAME
           value: elasticsearch-cluster
         - name: DISCOVERY_SERVICE
           value: elasticsearch-discovery
         - name: ES_JAVA_OPTS
            value: "-Xms256m -Xmx256m" # or more, if you want
         ports:
         - name: tcp-transport
           containerPort: 9300
         volumeMounts:
         - name: configmap
           mountPath: /etc/elasticsearch/elasticsearch.yml
           subPath: elasticsearch.yml
         - name: storage
           mountPath: /usr/share/elasticsearch/data
       volumes:
       - name: configmap
         configMap:
           name: elasticsearch-master-configmap
       - emptyDir:
           medium: ""
         name: storage

Client nodes can be deployed in a very similar way, so I won't duplicate the code for that here.
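For reference, a client (coordinating-only) node mainly differs in its `elasticsearch.yml` role settings; the Deployment, Service and environment wiring would mirror the master manifests above. The ConfigMap name below is an illustrative assumption, not part of the original answer:

```yaml
# Hypothetical client-node ConfigMap, mirroring the master ConfigMap above.
# A coordinating-only node disables all three roles and only routes requests.
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-client-configmap
data:
  elasticsearch.yml: |
    cluster.name: "${CLUSTER_NAME}"
    node.name: "${NODE_NAME}"

    network.host: 0.0.0.0

    discovery.zen.minimum_master_nodes: 2
    discovery.zen.ping.unicast.hosts: ${DISCOVERY_SERVICE}

    # coordinating-only: no master, data, or ingest role
    node.master: false
    node.data: false
    node.ingest: false
```

Unlike the masters, client nodes should expose port 9200 through a Service, since they are the ones your applications talk to.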

  2. Data nodes are a bit special: they need persistent storage, so you have to use a StatefulSet, with PersistentVolumeClaims to create the disks for these pods. I would do something like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: elasticsearch
      labels:
        component: elasticsearch
        role: data
        version: v6.5.0
    spec:
      ports:
      - name: http
        port: 9200 # in this example the data nodes are also used as client nodes
      - name: transport
        port: 9300
      selector:
        component: elasticsearch
        role: data
        version: v6.5.0
      type: ClusterIP
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: elasticsearch-data-configmap
    data:
      elasticsearch.yml: |
        cluster.name: "${CLUSTER_NAME}"
        node.name: "${NODE_NAME}"

        network.host: 0.0.0.0

        # (no_master_eligible_nodes / 2) + 1
        discovery.zen.minimum_master_nodes: 2
        discovery.zen.ping.unicast.hosts: ${DISCOVERY_SERVICE}

        node.master: false
        node.data: true
        node.ingest: false
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: elasticsearch-data
      labels:
        component: elasticsearch
        role: data
        version: v6.5.0
    spec:
      serviceName: elasticsearch
      replicas: 1 # choose an appropriate number
      selector:
        matchLabels:
          component: elasticsearch
          role: data
          version: v6.5.0
      template:
        metadata:
          labels:
            component: elasticsearch
            role: data
            version: v6.5.0
        spec:
          affinity:
            # once again, I suggest using nodeAffinity as well
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: component
                    operator: In
                    values:
                    - elasticsearch
                  - key: role
                    operator: In
                    values:
                    - data
                topologyKey: kubernetes.io/hostname
          terminationGracePeriodSeconds: 180
          initContainers:
          # just basic ES environment configuration
          - name: init-sysctl
            image: busybox:1.27.2
            command:
            - sysctl
            - -w
            - vm.max_map_count=262144
            securityContext:
              privileged: true
          containers:
          - name: elasticsearch-production-container
            image: # the same image used for the master nodes
            imagePullPolicy: Always
            env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: CLUSTER_NAME
              value: elasticsearch-cluster
            - name: DISCOVERY_SERVICE
              value: elasticsearch-discovery
            - name: ES_JAVA_OPTS
              value: "-Xms31g -Xmx31g" # no more than 32 GB!
            ports:
            - name: http
              containerPort: 9200
            - name: tcp-transport
              containerPort: 9300
            volumeMounts:
            - name: configmap
              mountPath: /etc/elasticsearch/elasticsearch.yml
              subPath: elasticsearch.yml
            - name: elasticsearch-node-pvc
              mountPath: /usr/share/elasticsearch/data
            readinessProbe:
              httpGet:
                path: /_cluster/health?local=true
                port: 9200
              initialDelaySeconds: 15
            livenessProbe:
              exec:
                command:
                - /usr/bin/pgrep
                - -x
                - "java"
              initialDelaySeconds: 15
            resources:
              requests: # tune these according to your needs
                memory: "32Gi"
                cpu: "11"
          volumes:
          - name: configmap
            configMap:
              name: elasticsearch-data-configmap
      volumeClaimTemplates:
      - metadata:
          name: elasticsearch-node-pvc
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: # this depends on your K8s environment
          resources:
            requests:
              storage: 350Gi # choose the storage size you need per ES data node

Hope this helps!