I am trying to apply rate limiting to some of our internal services (inside the mesh).
I used the example from the documentation and generated the Redis rate-limiting configuration, which includes a (Redis) handler, a quota instance, a QuotaSpec, a QuotaSpecBinding, and a rule to apply the handler.
The Redis handler:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: redishandler
  namespace: istio-system
spec:
  compiledAdapter: redisquota
  params:
    redisServerUrl: <REDIS>:6379
    connectionPoolSize: 10
    quotas:
    - name: requestcountquota.instance.istio-system
      maxAmount: 10
      validDuration: 100s
      rateLimitAlgorithm: FIXED_WINDOW
      overrides:
      - dimensions:
          destination: s1
        maxAmount: 1
      - dimensions:
          destination: s3
        maxAmount: 1
      - dimensions:
          destination: s2
        maxAmount: 1
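In case it is useful, reachability from the mesh to Redis can be sanity-checked along these lines (a sketch; the pod name and image are stand-ins, and <REDIS> is the same placeholder as in the handler above):

# Spin up a throwaway pod and ping the Redis instance the handler points at.
kubectl -n default run redis-ping --rm -it --image=redis:5 --restart=Never -- \
  redis-cli -h <REDIS> -p 6379 ping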
The quota instance (at the moment I am only interested in limiting by destination):
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcountquota
  namespace: istio-system
spec:
  compiledTemplate: quota
  params:
    dimensions:
      destination: destination.labels["app"] | destination.service.host | "unknown"
The QuotaSpec, charging 1 per request if I understand correctly:
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcountquota
The QuotaSpecBinding, with all participating services listed up front. I also tried service: "*", which also did nothing:
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  - name: s2
    namespace: default
  - name: s3
    namespace: default
  - name: s1
    namespace: default
  # - service: '*'  # Uncomment this to bind *all* services to request-count
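A minimal way to double-check that the binding object exists with the expected services (a sketch, assuming kubectl access and the stock config.istio.io CRDs from the 1.4 install):

kubectl -n istio-system get quotaspecbindings.config.istio.io request-count -o yaml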
The rule applying the handler. Currently it applies on all occasions (I tried adding a match clause, but that did not change anything either):
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: redishandler
    instances:
    - requestcountquota
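To confirm that the handler, instance, and rule objects were all accepted by the API server (a sketch along the same lines):

kubectl -n istio-system get handlers.config.istio.io,instances.config.istio.io,rules.config.istio.io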
The VirtualService definitions for all participants are very similar:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: s1
spec:
  hosts:
  - s1
  http:
  - route:
    - destination:
        host: s1
The problem is that nothing happens and there is no rate limiting. I tested with curl from pods inside the mesh. The Redis instance is empty (no keys on db 0, which I assume is what the rate limiting would use), so I know it practically cannot be rate-limiting anything.
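For reference, the test was roughly of this shape (a sketch; the service host and the redis-cli invocation are stand-ins for our actual setup):

# Fire 20 requests at s2 from a pod inside the mesh and print only the status codes;
# with the maxAmount: 1 override above, most of these should come back as 429.
for i in $(seq 1 20); do
  curl -s -o /dev/null -w '%{http_code}\n' http://s2.default.svc.cluster.local/
done

# List whatever keys the redisquota adapter wrote to db 0 (in my case: none).
redis-cli -h <REDIS> -p 6379 -n 0 --scan --pattern '*'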
The handler seems to be configured properly (how can I make sure?), because some errors mentioning it were reported in the mixer (policy). There are still some errors, but none that I would associate with this problem or with the configuration. The only line in which the Redis handler is mentioned is this:
2019-12-17T13:44:22.958041Z info adapters adapter closed all scheduled daemons and workers {"adapter": "redishandler.istio-system"}
It is unclear whether that indicates a problem; I assume it does not.
These are the rest of the lines from the reload after I deploy:
2019-12-17T13:44:22.601644Z info Built new config.Snapshot: id='43'
2019-12-17T13:44:22.601866Z info adapters getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.601881Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2019-12-17T13:44:22.602718Z info adapters Waiting for kubernetes cache sync... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903844Z info adapters Cache sync successful. {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903878Z info adapters getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903882Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2019-12-17T13:44:22.904808Z info Setting up event handlers
2019-12-17T13:44:22.904939Z info Starting Secrets controller
2019-12-17T13:44:22.904991Z info Waiting for informer caches to sync
2019-12-17T13:44:22.957893Z info Cleaning up handler table, with config ID:42
2019-12-17T13:44:22.957924Z info adapters deleted remote controller {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.957999Z info adapters adapter closed all scheduled daemons and workers {"adapter": "prometheus.istio-system"}
2019-12-17T13:44:22.958041Z info adapters adapter closed all scheduled daemons and workers {"adapter": "redishandler.istio-system"}
2019-12-17T13:44:22.958065Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958050Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958096Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958182Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:23.958109Z info adapters adapter closed all scheduled daemons and workers {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:55:21.042131Z info transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-12-17T14:14:00.265722Z info transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I am using the demo configuration profile with rate limiting enabled. This is on Istio 1.4.0, deployed on EKS.
I also tried memquota with low limits (this is our staging environment), and nothing seems to work. I never get a 429, no matter how far I exceed the configured rate limit.
I do not know how to debug this and find the misconfiguration that is causing it to do nothing.
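Here is roughly what I have been probing so far, in case it helps (a sketch, assuming the default component names and labels of the 1.4 demo profile):

# Is the policy component deployed at all?
kubectl -n istio-system get deployment istio-policy
# Any config or adapter errors on the policy side of the mixer?
kubectl -n istio-system logs -l istio-mixer-type=policy -c mixer | grep -iE 'error|redis'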
Any help would be appreciated.
Answer 0 (score: 2)
I too spent a lot of time trying to decipher the documentation and get the sample working.
According to the documentation, they recommend enabling policy checks:
https://istio.io/docs/tasks/policy-enforcement/rate-limiting/
However, when that did not work, I did an "istioctl profile dump", searched for policy, and tried several settings.
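One concrete thing to check (a sketch based on the 1.4 mesh-config layout) is whether policy checks are actually enabled in the live mesh config; disablePolicyChecks must be false for quota rules to fire:

kubectl -n istio-system get configmap istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks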
I used a Helm install and passed the following, and was then able to get the described behavior:

--set global.disablePolicyChecks=false \
--set values.pilot.policy.enabled=true

===> This works, but it is not in the documentation.
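Put together, the install looked roughly like this (a sketch; the release name and chart location are stand-ins for the actual setup, and only the two --set flags above are the point):

helm upgrade --install istio ./install/kubernetes/helm/istio \
  --namespace istio-system \
  --set global.disablePolicyChecks=false \
  --set values.pilot.policy.enabled=true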