Kubernetes HPA cannot get memory metrics (even though they are clearly specified)

Date: 2019-11-27 11:53:07

Tags: kubernetes

I am trying to implement autoscaling for the Pods in our cluster. I tried it with a "dummy" deployment and HPA and everything worked fine. Now I am trying to integrate it into our "real" microservices, and the HPA keeps returning:

Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: missing request for memory
Events:
  Type     Reason                        Age                   From                       Message
  ----     ------                        ----                  ----                       -------
  Warning  FailedGetResourceMetric       18m (x5 over 19m)     horizontal-pod-autoscaler  unable to get metrics for resource memory: no metrics returned from resource metrics API
  Warning  FailedComputeMetricsReplicas  18m (x5 over 19m)     horizontal-pod-autoscaler  failed to get memory utilization: unable to get metrics for resource memory: no metrics returned from resource metrics API
  Warning  FailedComputeMetricsReplicas  16m (x7 over 18m)     horizontal-pod-autoscaler  failed to get memory utilization: missing request for memory
  Warning  FailedGetResourceMetric       4m38s (x56 over 18m)  horizontal-pod-autoscaler  missing request for memory

Here is my HPA:


apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: #{Name}
  namespace: #{Namespace}
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: #{Name}
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80

And the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: #{Name}
  namespace: #{Namespace}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: #{Name}
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
      labels:
        app: #{Name}
    spec:
      containers:
      - name: #{Name}
        image: #{image}
        resources:
          limits:
            cpu: 500m
            memory: "300Mi"
          requests:
            cpu: 100m
            memory: "200Mi"
        ports:
        - containerPort: 80
          name: #{ContainerPort}

When I run kubectl top pods I can see both memory and CPU, and when I run kubectl describe pod I can see the requests and limits as well:

    Limits:
      cpu:     500m
      memory:  300Mi
    Requests:
      cpu:     100m
      memory:  200Mi

The only difference I can think of is that my dummy service does not have the Linkerd sidecar.

1 answer:

Answer 0 (score: 1):

For the HPA to use resource metrics, every container of the Pod needs a request for the given resource (CPU or memory).

The Linkerd sidecar container in your Pod does not seem to define a memory request (it might have a CPU request). That is why the HPA complains about missing request for memory.
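You can verify this by listing the resource requests of every container in one of the Pods (the pod name below is a placeholder); if this diagnosis is right, the linkerd-proxy container will show no memory entry:

```shell
# Print each container's name and its resource requests, one per line.
kubectl get pod <pod-name> -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.resources.requests}{"\n"}{end}'
```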

However, you can configure the memory and CPU requests of the Linkerd proxy container with the --proxy-cpu-request and --proxy-memory-request injection flags.
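For example, when injecting the proxy manually with the linkerd CLI (the file name and resource values here are placeholders):

```shell
# Re-inject the proxy with explicit resource requests so that every
# container in the Pod has a memory request.
linkerd inject \
  --proxy-cpu-request 100m \
  --proxy-memory-request 128Mi \
  deployment.yaml | kubectl apply -f -
```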

Another possibility is to use these annotations to configure the CPU and memory requests:

  • config.linkerd.io/proxy-cpu-request
  • config.linkerd.io/proxy-memory-request
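Since your Deployment already uses the linkerd.io/inject annotation, these can go on the same Pod template (a sketch; the resource values are examples):

```yaml
# Pod template annotations on the Deployment; the proxy injector reads
# these and sets the resource requests on the injected sidecar.
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
        config.linkerd.io/proxy-cpu-request: 100m
        config.linkerd.io/proxy-memory-request: 128Mi
```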

Defining the memory request in either of these two ways should make the HPA work.
