EKS - Node Labels

Posted: 2018-07-19 22:03:12

Tags: kubernetes amazon-eks

When deploying worker nodes in EKS, is it possible to add node labels? I don't see an option for this in the CloudFormation template available for worker nodes:

EKS-CF-Workers

The only option I see right now is to add labels with the kubectl label command, which is a post-cluster-setup step. However, I need this to be fully automated, meaning applications are deployed automatically once the cluster is provisioned, and the labels help achieve isolation.
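
For reference, the manual workaround looks like this (the node name is just a placeholder):

kubectl label nodes <node-name> my-key=my-value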

4 Answers:

Answer 0 (score: 14)

With the new EKS-optimized AMI (amazon-eks-node-vXX) provided by AWS and the refactored CloudFormation templates, it is now possible to add node labels: it is as simple as passing arguments to the BootstrapArguments parameter of the amazon-eks-nodegroup.yaml CloudFormation template, for example --kubelet-extra-args --node-labels=my-key=my-value. For more details, see the AWS announcement: Improvements for Amazon EKS Worker Node Provisioning.
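
For illustration, here is a sketch of passing that parameter when creating the worker node stack with the AWS CLI. The stack and cluster names are placeholders, and the template's other required parameters (NodeGroupName, VpcId, Subnets, and so on) are omitted for brevity:

aws cloudformation create-stack \
  --stack-name eks-worker-nodes \
  --template-body file://amazon-eks-nodegroup.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters \
      ParameterKey=ClusterName,ParameterValue=my-cluster \
      ParameterKey=BootstrapArguments,ParameterValue="--kubelet-extra-args --node-labels=my-key=my-value"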

Answer 1 (score: 3)

You need to add the configuration in the launch configuration's UserData and pass the --node-labels option to the kubelet. Below is an example UserData that includes node labels; the relevant line is the sed command that appends --node-labels tier=development to the kubelet flags:

NodeLaunchConfig:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  AssociatePublicIpAddress: 'true'
  IamInstanceProfile: !Ref NodeInstanceProfile
  ImageId: !Ref NodeImageId
  InstanceType: !Ref NodeInstanceType
  KeyName: !Ref KeyName
  SecurityGroups:
  - !Ref NodeSecurityGroup
  UserData:
    Fn::Base64:
      Fn::Join: [
        "",
        [
          "#!/bin/bash -xe\n",
          "CA_CERTIFICATE_DIRECTORY=/etc/kubernetes/pki", "\n",
          "CA_CERTIFICATE_FILE_PATH=$CA_CERTIFICATE_DIRECTORY/ca.crt", "\n",
          "MODEL_DIRECTORY_PATH=~/.aws/eks", "\n",
          "MODEL_FILE_PATH=$MODEL_DIRECTORY_PATH/eks-2017-11-01.normal.json", "\n",
          "mkdir -p $CA_CERTIFICATE_DIRECTORY", "\n",
          "mkdir -p $MODEL_DIRECTORY_PATH", "\n",
          "curl -o $MODEL_FILE_PATH https://s3-us-west-2.amazonaws.com/amazon-eks/1.10.3/2018-06-05/eks-2017-11-01.normal.json", "\n",
          "aws configure add-model --service-model file://$MODEL_FILE_PATH --service-name eks", "\n",
          "aws eks describe-cluster --region=", { Ref: "AWS::Region" }," --name=", { Ref: ClusterName }," --query 'cluster.{certificateAuthorityData: certificateAuthority.data, endpoint: endpoint}' > /tmp/describe_cluster_result.json", "\n",
          "cat /tmp/describe_cluster_result.json | grep certificateAuthorityData | awk '{print $2}' | sed 's/[,\"]//g' | base64 -d >  $CA_CERTIFICATE_FILE_PATH", "\n",
          "MASTER_ENDPOINT=$(cat /tmp/describe_cluster_result.json | grep endpoint | awk '{print $2}' | sed 's/[,\"]//g')", "\n",
          "INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)", "\n",
          "sed -i s,MASTER_ENDPOINT,$MASTER_ENDPOINT,g /var/lib/kubelet/kubeconfig", "\n",
          "sed -i s,CLUSTER_NAME,", { Ref: ClusterName }, ",g /var/lib/kubelet/kubeconfig", "\n",
          "sed -i s,REGION,", { Ref: "AWS::Region" }, ",g /etc/systemd/system/kubelet.service", "\n",
          "sed -i s,MAX_PODS,", { "Fn::FindInMap": [ MaxPodsPerNode, { Ref: NodeInstanceType }, MaxPods ] }, ",g /etc/systemd/system/kubelet.service", "\n",
          "sed -i s,MASTER_ENDPOINT,$MASTER_ENDPOINT,g /etc/systemd/system/kubelet.service", "\n",
          "sed -i s,INTERNAL_IP,$INTERNAL_IP,g /etc/systemd/system/kubelet.service", "\n",
          "DNS_CLUSTER_IP=10.100.0.10", "\n",
          "if [[ $INTERNAL_IP == 10.* ]] ; then DNS_CLUSTER_IP=172.20.0.10; fi", "\n",
          "sed -i s,DNS_CLUSTER_IP,$DNS_CLUSTER_IP,g  /etc/systemd/system/kubelet.service", "\n",
          "sed -i s,CERTIFICATE_AUTHORITY_FILE,$CA_CERTIFICATE_FILE_PATH,g /var/lib/kubelet/kubeconfig" , "\n",
          "sed -i s,CLIENT_CA_FILE,$CA_CERTIFICATE_FILE_PATH,g  /etc/systemd/system/kubelet.service" , "\n"
          "sed -i s,INTERNAL_IP/a,--node-labels tier=development,g  /etc/systemd/system/kubelet.service" , "\n"
          "systemctl daemon-reload", "\n",
          "systemctl restart kubelet", "\n",
          "/opt/aws/bin/cfn-signal -e $? ",
          "         --stack ", { Ref: "AWS::StackName" },
          "         --resource NodeGroup ",
          "         --region ", { Ref: "AWS::Region" }, "\n"
        ]
      ]

Warning: I have not tested this, but I did something similar and it worked well.
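
Assuming the node registers successfully, the label can be verified from the cluster side; the -L flag prints the value of the given label as an extra column:

kubectl get nodes -L tier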

Answer 2 (score: 1)

I managed to get it working with the following sed expression:

sed -i '/--node-ip/ a \ \ --node-labels group=node \\' /etc/systemd/system/kubelet.service
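
This matches the line containing --node-ip in /etc/systemd/system/kubelet.service and appends a new line after it. Assuming the stock unit file from the EKS-optimized AMI, where each kubelet flag ends with a shell line continuation, the relevant excerpt would afterwards look roughly like this (the IP is a placeholder for the instance's actual address):

  --node-ip=10.0.0.42 \
  --node-labels group=node \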

Answer 3 (score: 1)

If you are using eksctl, you can add labels to the node groups, like this:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: dev-cluster
  region: eu-north-1

nodeGroups:
  - name: ng-1-workers
    labels: { role: workers }
    instanceType: m5.xlarge
    desiredCapacity: 10
    privateNetworking: true
  - name: ng-2-builders
    labels: { role: builders }
    instanceType: m5.2xlarge
    desiredCapacity: 2
    privateNetworking: true
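
Assuming the config above is saved as cluster.yaml, the cluster and its labelled node groups can then be created, and the labels checked, with:

eksctl create cluster -f cluster.yaml
kubectl get nodes -l role=workers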

For more information, see https://eksctl.io/usage/managing-nodegroups/