Here is my prometheus.yml configuration:
$ cat /etc/prometheus/prometheus.yml
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

rule_files:
  - "/etc/prometheus/alert.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    metrics_path: "/metrics"
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['172.30.30.6:9090']
        labels:
          group: 'monitor'

  - job_name: 'cadvisor'
    metrics_path: "/metrics"
    scrape_interval: 5s
    static_configs:
      - targets: ['172.30.30.5:8080', '172.30.30.6:8080', '172.30.30.11:8080']
        labels:
          group: 'monitor'

  - job_name: 'node-exporter'
    metrics_path: "/metrics"
    scrape_interval: 5s
    static_configs:
      - targets: ['172.30.30.5:9100', '172.30.30.6:9100', '172.30.30.11:9100']
        labels:
          group: 'monitor'

  - job_name: 'alertmanager'
    metrics_path: "/metrics"
    scrape_interval: 5s
    static_configs:
      - targets: ['172.30.30.6:9093']
        labels:
          group: 'monitor'
In the Prometheus UI, all targets show as UP. But in the UI graph, no datapoints are found.
Here is what the Prometheus log reports:
Apr 23 19:46:31 localhost prometheus[31806]: time="2017-04-23T11:46:31Z" level=warning msg="Scrape health sample discarded" error="sample timestamp out of order" sample=up{group="monitor", instance="172.30.30.11:8080", job="cadvisor"} => 1 @[1492947990.969] source="scrape.go:586"
Apr 23 19:46:31 localhost prometheus[31806]: time="2017-04-23T11:46:31Z" level=warning msg="Scrape duration sample discarded" error="sample timestamp out of order" sample=scrape_duration_seconds{group="monitor", instance="172.30.30.11:8080", job="cadvisor"} => 0.122665982 @[1492947990.969] source="scrape.go:589"
Apr 23 19:46:31 localhost prometheus[31806]: time="2017-04-23T11:46:31Z" level=warning msg="Scrape sample count sample discarded" error="sample timestamp out of order" sample=scrape_duration_seconds{group="monitor", instance="172.30.30.11:8080", job="cadvisor"} => 0.122665982 @[1492947990.969] source="scrape.go:592"
Apr 23 19:46:31 localhost prometheus[31806]: time="2017-04-23T11:46:31Z" level=warning msg="Scrape sample count post-relabeling sample discarded" error="sample timestamp out of order" sample=scrape_duration_seconds{group="monitor", instance="172.30.30.11:8080", job="cadvisor"} => 0.122665982 @[1492947990.969] source="scrape.go:595"
Apr 23 19:46:31 localhost prometheus[31806]: time="2017-04-23T11:46:31Z" level=warning msg="Error on ingesting out-of-order samples" numDropped=1332 source="scrape.go:533"
Apr 23 19:46:31 localhost prometheus[31806]: time="2017-04-23T11:46:31Z" level=warning msg="Scrape health sample discarded" error="sample timestamp out of order" sample=up{group="monitor", instance="172.30.30.5:9100", job="node-exporter"} => 1 @[1492947991.37] source="scrape.go:586"
Apr 23 19:46:31 localhost prometheus[31806]: time="2017-04-23T11:46:31Z" level=warning msg="Scrape duration sample discarded" error="sample timestamp out of order" sample=scrape_duration_seconds{group="monitor", instance="172.30.30.5:9100", job="node-exporter"} => 0.067132553 @[1492947991.37] source="scrape.go:589"
Apr 23 19:46:31 localhost prometheus[31806]: time="2017-04-23T11:46:31Z" level=warning msg="Scrape sample count sample discarded" error="sample timestamp out of order" sample=scrape_duration_seconds{group="monitor", instance="172.30.30.5:9100", job="node-exporter"} => 0.067132553 @[1492947991.37] source="scrape.go:592"
Apr 23 19:46:31 localhost prometheus[31806]: time="2017-04-23T11:46:31Z" level=warning msg="Scrape sample count post-relabeling sample discarded" error="sample timestamp out of order" sample=scrape_duration_seconds{group="monitor", instance="172.30.30.5:9100", job="node-exporter"} => 0.067132553 @[1492947991.37] source="scrape.go:595"
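One thing I noticed: the discarded samples carry explicit epoch timestamps (e.g. @[1492947990.969]). As a quick sanity check, converting one of them shows it matches the log line's own UTC time, so the scraped samples do not look stale. A minimal check with GNU date:

```shell
# Convert the sample's epoch timestamp from the warning line to UTC.
# It matches the log's own time (2017-04-23T11:46:31Z), so the sample
# being discarded was freshly scraped, not an old one.
date -u -d @1492947990 +"%Y-%m-%dT%H:%M:%SZ"
# → 2017-04-23T11:46:30Z
```

Since the samples themselves appear current, I assume the "out of order" rejection means something has already written newer timestamps for the same series. Possible causes I am considering (assumptions to verify, not confirmed by these logs): a second Prometheus process writing to the same storage directory, a duplicated scrape of the same target under identical labels, or the system clock having jumped backwards on this host.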