Kubernetes autoscaler - NotTriggerScaleUp: pod didn't trigger scale-up (it wouldn't fit if a new node is added)

Asked: 2019-09-20 19:31:45

Tags: kubernetes autoscaling amazon-eks

I want to run a Job on each node, with only one of these containers running on any given node at a time.

  • I have scheduled a number of these Jobs
  • I now have a bunch of pending pods
  • I would like those pending pods to trigger a node scale-up event (which does not happen)

Very much like this question (also asked by me): Kubernetes reports "pod didn't trigger scale-up (it wouldn't fit if a new node is added)" even though it would?

But in this case, it really should fit on a new node.

How can I diagnose why Kubernetes has decided that a node scale-up event is not possible?

My Job YAML:

apiVersion: batch/v1
kind: Job
metadata:
  name: example-job-${job_id}
  labels:
    job-in-progress: job-in-progress-yes
spec:
  template:
    metadata:
      name: example-job-${job_id}
    spec:
      # this bit ensures a job/container does not get scheduled alongside another - 'anti' affinity
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname 
            labelSelector:
              matchExpressions:
              - key: job-in-progress
                operator: NotIn
                values:
                - job-in-progress-yes
      containers:
      - name: buster-slim
        image: debian:buster-slim
        command: ["bash"]
        args: ["-c", "sleep 60; echo ${echo_param}"]
      restartPolicy: Never

Autoscaler logs:

I0920 19:27:58.190751       1 static_autoscaler.go:128] Starting main loop
I0920 19:27:58.261972       1 auto_scaling_groups.go:320] Regenerating instance to ASG map for ASGs: []
I0920 19:27:58.262003       1 aws_manager.go:152] Refreshed ASG list, next refresh after 2019-09-20 19:28:08.261998185 +0000 UTC m=+302.102284346
I0920 19:27:58.262092       1 static_autoscaler.go:261] Filtering out schedulables
I0920 19:27:58.264212       1 static_autoscaler.go:271] No schedulable pods
I0920 19:27:58.264246       1 scale_up.go:262] Pod default/example-job-21-npv6p is unschedulable
I0920 19:27:58.264252       1 scale_up.go:262] Pod default/example-job-28-zg4f8 is unschedulable
I0920 19:27:58.264258       1 scale_up.go:262] Pod default/example-job-24-fx9rd is unschedulable
I0920 19:27:58.264263       1 scale_up.go:262] Pod default/example-job-6-7mvqs is unschedulable
I0920 19:27:58.264268       1 scale_up.go:262] Pod default/example-job-20-splpq is unschedulable
I0920 19:27:58.264273       1 scale_up.go:262] Pod default/example-job-25-g5mdg is unschedulable
I0920 19:27:58.264279       1 scale_up.go:262] Pod default/example-job-16-wtnw4 is unschedulable
I0920 19:27:58.264284       1 scale_up.go:262] Pod default/example-job-7-g89cp is unschedulable
I0920 19:27:58.264289       1 scale_up.go:262] Pod default/example-job-8-mglhh is unschedulable
I0920 19:27:58.264323       1 scale_up.go:304] Upcoming 0 nodes
I0920 19:27:58.264370       1 scale_up.go:420] No expansion options
I0920 19:27:58.264511       1 static_autoscaler.go:333] Calculating unneeded nodes
I0920 19:27:58.264533       1 utils.go:474] Skipping ip-10-0-1-118.us-west-2.compute.internal - no node group config
I0920 19:27:58.264542       1 utils.go:474] Skipping ip-10-0-0-65.us-west-2.compute.internal - no node group config
I0920 19:27:58.265063       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-25-g5mdg", UID:"d2e0e48c-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7256", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added): 
I0920 19:27:58.265090       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-8-mglhh", UID:"c7d3ce78-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7267", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added): 
I0920 19:27:58.265101       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-6-7mvqs", UID:"c6a5b0e4-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7273", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added): 
I0920 19:27:58.265110       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-20-splpq", UID:"cfeb9521-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7259", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added): 
I0920 19:27:58.265363       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-21-npv6p", UID:"d084c067-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7275", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added): 
I0920 19:27:58.265384       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-16-wtnw4", UID:"ccbe48e0-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7265", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added): 
I0920 19:27:58.265490       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-28-zg4f8", UID:"d4afc868-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7269", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added): 
I0920 19:27:58.265515       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-24-fx9rd", UID:"d24975e5-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7271", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added): 
I0920 19:27:58.265685       1 static_autoscaler.go:360] Scale down status: unneededOnly=true lastScaleUpTime=2019-09-20 19:23:23.822104264 +0000 UTC m=+17.662390361 lastScaleDownDeleteTime=2019-09-20 19:23:23.822105556 +0000 UTC m=+17.662391653 lastScaleDownFailTime=2019-09-20 19:23:23.822106849 +0000 UTC m=+17.662392943 scaleDownForbidden=false isDeleteInProgress=false
I0920 19:27:58.265910       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-7-g89cp", UID:"c73cfaea-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7263", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added): 

3 Answers:

Answer 0 (score: 2)

I had defined the wrong parameters on the autoscaler:

I had to fix the node-group-auto-discovery and nodes parameters.

        - ./cluster-autoscaler
        - --cloud-provider=aws
        - --namespace=default
        - --scan-interval=25s
        - --scale-down-unneeded-time=30s
        - --nodes=1:20:terraform-eks-demo20190922161659090500000007--terraform-eks-demo20190922161700651000000008
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/example-job-runner
        - --logtostderr=true
        - --stderrthreshold=info
        - --v=4

Answer 1 (score: 1)

I had mistakenly added these as node labels: k8s.io/cluster-autoscaler/enabled and k8s.io/cluster-autoscaler/<YOUR CLUSTER NAME>

But they should actually be tags on the nodes in the worker group (AWS tags, not Kubernetes labels).

Specifically, if you are using the AWS EKS module in Terraform:

  workers_group_defaults = {
    tags = [{
        key                 = "k8s.io/cluster-autoscaler/enabled"
        value               = "TRUE"
        propagate_at_launch = true
      },{
        key                 = "k8s.io/cluster-autoscaler/${var.cluster_name}"
        value               = "owned"
        propagate_at_launch = true
      }]
  }

Answer 2 (score: 0)

I ran into this issue as well. I didn't see this documented particularly well anywhere you would expect it to be. Here are the detailed instructions from the main README.md:

AWS - Using auto-discovery of tagged instance groups

Auto-discovery finds ASGs with the tags below and automatically manages them based on the min and max size specified in the ASG. cloudProvider=aws only.

  • Tag the ASGs with keys matching .Values.autoDiscovery.tags, by default: k8s.io/cluster-autoscaler/enabled and k8s.io/cluster-autoscaler/<YOUR CLUSTER NAME>
  • Verify the IAM Permissions
  • Set autoDiscovery.clusterName=<YOUR CLUSTER NAME>
  • Set awsRegion=<YOUR AWS REGION>
  • Set awsAccessKeyID=<YOUR AWS KEY ID> and awsSecretAccessKey=<YOUR AWS SECRET KEY> if you want to use AWS credentials directly instead of an instance role

$ helm install my-release autoscaler/cluster-autoscaler-chart --set autoDiscovery.clusterName=<CLUSTER NAME>

My problem was that I had only specified the k8s.io/cluster-autoscaler/enabled tag instead of both. In hindsight this makes sense: if you have multiple k8s clusters in the same account, the cluster autoscaler needs the cluster-name tag to know which ASGs to actually scale.