Unable to stop requests from reaching the endpoint after the circuit breaker has tripped

Date: 2017-05-18 20:14:48

Tags: linkerd

I am trying to verify linkerd's circuit-breaking configuration by sending requests to a simple, always-failing endpoint deployed as a pod in the same k8s cluster, with linkerd deployed as a DaemonSet.

From the logs I can see that the circuit breaker does trip, but when I hit the endpoint again I still get a response from it.

Setup and test

I set up linkerd and the endpoint with the configurations below:

https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd-egress.yaml

https://raw.githubusercontent.com/zillani/kubex/master/examples/simple-err.yml

Endpoint behavior:

The endpoint always returns 500 Internal Server Error

Failure accrual settings: default; responseClassifier: retryable5XX
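For context, failure accrual can also be configured explicitly on the client instead of relying on the default. The snippet below is a sketch based on the linkerd 1.x client configuration format; the `io.l5d.consecutiveFailures` kind and the numbers are illustrative, not taken from the setup above:

```yaml
# Hypothetical linkerd 1.x client config (not the config used above):
# mark an endpoint dead after 5 consecutive failures, then wait a
# constant 10s before probing it again.
routers:
- protocol: http
  client:
    failureAccrual:
      kind: io.l5d.consecutiveFailures
      failures: 5
      backoff:
        kind: constant
        ms: 10000
```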

Curl through the proxy:

http_proxy=$(kubectl get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -L http://<loadbalancer-ingress>:8080/simple-err

Observations

1. In the admin metrics:

  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/connects" : 505,
  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/dtab/size.count" : 0,
  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/failed_connect_latency_ms.count" : 0,
  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/failure_accrual/probes" : 8,
  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/failure_accrual/removals" : 2,
  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/failure_accrual/removed_for_ms" : 268542,
  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/failure_accrual/revivals" : 0,
  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/failures" : 505,
  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/failures/com.twitter.finagle.service.ResponseClassificationSyntheticException" : 505,
  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/loadbalancer/adds" : 2,
  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/loadbalancer/algorithm/p2c_least_loaded" : 1.0,
  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/loadbalancer/available" : 2.0,

  "rt/outgoing/service/svc/<loadbalancer-ingress>:8080/failures" : 5,
  "rt/outgoing/service/svc/<loadbalancer-ingress>:8080/failures/com.twitter.finagle.service.ResponseClassificationSyntheticException" : 5,
  "rt/outgoing/service/svc/<loadbalancer-ingress>:8080/pending" : 0.0,
  "rt/outgoing/service/svc/<loadbalancer-ingress>:8080/request_latency_ms.count" : 0,
  "rt/outgoing/service/svc/<loadbalancer-ingress>:8080/requests" : 5,
  "rt/outgoing/service/svc/<loadbalancer-ingress>:8080/retries/budget" : 100.0,
  "rt/outgoing/service/svc/<loadbalancer-ingress>:8080/retries/budget_exhausted" : 5,
  "rt/outgoing/service/svc/<loadbalancer-ingress>:8080/retries/per_request.count" : 0,
  "rt/outgoing/service/svc/<loadbalancer-ingress>:8080/retries/total" : 500,
  "rt/outgoing/service/svc/<loadbalancer-ingress>:8080/success" : 0,
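The `failure_accrual` counters are the ones to watch here. As a minimal sketch (assuming the dump was saved from linkerd's admin endpoint, by default `http://localhost:9990/admin/metrics.json`), the relevant lines can be pulled out with grep; the sample file below just mirrors the numbers observed above:

```shell
# Write a sample of the observed metrics to a file; in a live setup this
# would come from e.g.:  curl -s http://localhost:9990/admin/metrics.json
cat > /tmp/metrics.json <<'EOF'
{
  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/failure_accrual/probes" : 8,
  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/failure_accrual/removals" : 2,
  "rt/outgoing/client/$/io.buoyant.rinet/8080/<loadbalancer-ingress>/failure_accrual/revivals" : 0
}
EOF

# removals > 0 with revivals == 0 means endpoints are currently marked dead;
# probes counts the occasional test requests linkerd still sends them.
grep 'failure_accrual' /tmp/metrics.json
```

`removals` going up while `revivals` stays at 0 confirms the breaker tripped and has not yet let the endpoints back in.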

2. In the logs:

I 0518 10:31:15.816 UTC THREAD23 TraceId:e57aa1baa5148cc5: FailureAccrualFactory marking connection to "$/io.buoyant.rinet/8080/<loadbalancer-ingress>" as dead.

Question

After the node is marked dead, new requests to linkerd (the same http_proxy command above) still hit the endpoint and get a response back.

1 Answer:

Answer 0: (score: 0)

This question was answered on the Linkerd community forum. Adding the answer here for completeness:


When failure accrual (the circuit breaker) triggers, the endpoint enters a state called Busy. This does not actually guarantee the endpoint will not be used. Most load balancers (including the default P2CLeastLoaded) simply pick the healthiest endpoint available. If failure accrual has triggered on all endpoints, the balancer has no choice but to pick one that is in the Busy state.
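To make that "pick the least bad option" behavior concrete, here is a toy sketch of power-of-two-choices (P2C) selection. The endpoint names, states, and loads are invented for illustration; this is not linkerd code. Because failure accrual has marked every endpoint Busy, whichever candidate wins the comparison is still Busy, so the request is sent to a failing backend anyway:

```shell
#!/usr/bin/env bash
# Toy P2C (power-of-two-choices) selection. Loads are invented; the point
# is that when failure accrual has marked every endpoint Busy, the winner
# of "pick two, keep the better one" is still a Busy endpoint.
names=("ep-a" "ep-b" "ep-c")
states=("Busy" "Busy" "Busy")   # failure accrual tripped everywhere
loads=(3 1 2)                   # outstanding requests per endpoint

# Pick two distinct candidates at random.
i=$((RANDOM % 3))
j=$(((i + 1 + RANDOM % 2) % 3))

# Keep the less-loaded candidate, as P2CLeastLoaded would.
if [ "${loads[i]}" -le "${loads[j]}" ]; then pick=$i; else pick=$j; fi
echo "selected ${names[pick]} (state=${states[pick]}, load=${loads[pick]})"
```

So to actually stop traffic when all endpoints are failing, the check has to happen above the load balancer (for example a response classifier plus retry budget), not in failure accrual alone.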