I have a basic prometheus.yml file in my environment, i.e.:
###
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: prometheus-core
data:
  prometheus.yml: |
    global:
      scrape_interval: 10s
      scrape_timeout: 10s
      evaluation_interval: 10s
    rule_files:
      - '/etc/prometheus-rules/*.rules'
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
      - job_name: 'prometheus'
        # Override the global default and scrape targets from this job every 5 seconds.
        scrape_interval: 5s
        # metrics_path defaults to '/metrics'
        # scheme defaults to 'http'.
        static_configs:
          - targets: ['localhost:9090']
Now, whenever I add a new node to my environment, my prometheus.yml file should be updated automatically so that the node is added to the targets, as below:
###
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: prometheus-core
data:
  prometheus.yml: |
    global:
      scrape_interval: 10s
      scrape_timeout: 10s
      evaluation_interval: 10s
    rule_files:
      - '/etc/prometheus-rules/*.rules'
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
      - job_name: 'prometheus'
        # Override the global default and scrape targets from this job every 5 seconds.
        scrape_interval: 5s
        # metrics_path defaults to '/metrics'
        # scheme defaults to 'http'.
        static_configs:
          - targets: ['localhost:9090','12.10.17.6:9100','12.10.17.19:9100']
Can anyone suggest how I can achieve this?
Answer 0 (score: 2)
Prometheus supports a Kubernetes service discovery mechanism; see the documentation for details.
So instead of the static_configs section, you should add a section similar to this:
scrape_configs:
  - job_name: 'kubernetes-service-endpoints'
    kubernetes_sd_configs:
      - role: endpoints
        ...
See this example configuration file for how it is done.
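As a concrete illustration of discovering nodes automatically, here is a minimal sketch using the node role instead of the endpoints role shown above. It assumes node-exporter runs as a DaemonSet listening on port 9100 on every node, and that the discovered address carries the kubelet port 10250; the job name and relabeling rules are illustrative, not part of the original answer, so adjust them to your setup:

scrape_configs:
  - job_name: 'kubernetes-nodes'   # hypothetical job name
    kubernetes_sd_configs:
      - role: node                 # discover every node in the cluster
    relabel_configs:
      # Copy the Kubernetes node labels onto the scraped timeseries.
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      # The node role advertises the kubelet address (assumed here to be
      # port 10250); rewrite it to the assumed node-exporter port 9100.
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__

With a config like this, Prometheus re-discovers targets from the Kubernetes API on its own, so new nodes start being scraped automatically and the ConfigMap never needs manual edits.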