A basic Prometheus/Grafana example with a node exporter using docker-compose

Posted: 2017-06-20 11:58:58

Tags: docker docker-compose grafana prometheus

Question: How do I configure the Prometheus server to scrape data from the node exporter?

I have successfully set up the data source in Grafana and can see the default dashboard using the docker-compose.yml below. The three services are:

  • Prometheus server
  • Node exporter
  • Grafana

docker-compose.yml:

version: '2'

services:

  # Prometheus server (will scrape the targets defined in prometheus.yml)
  prometheus_srv:
    image: prom/prometheus
    container_name: prometheus_server
    hostname: prometheus_server

  # Node exporter (exposes host metrics on port 9100)
  prometheus_node:
    image: prom/node-exporter
    container_name: prom_node_exporter
    hostname: prom_node_exporter
    depends_on:
      - prometheus_srv

  # Grafana (visualizes Prometheus as a data source)
  grafana:
    image: grafana/grafana
    container_name: grafana_server
    hostname: grafana_server
    depends_on:
      - prometheus_srv
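
None of the services publish any ports, so the Prometheus and Grafana UIs are only reachable from inside the Docker network. Below is a minimal sketch of the port mappings you would typically add; the host-side ports match the images' defaults, but publishing them this way is my assumption, not part of the original file:

  prometheus_srv:
    image: prom/prometheus
    ports:
      - "9090:9090"   # Prometheus web UI

  prometheus_node:
    image: prom/node-exporter
    ports:
      - "9100:9100"   # node-exporter /metrics endpoint

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"   # Grafana web UI

The mappings are only needed for access from the host: Compose puts all three services on the same default network, so Prometheus can reach the exporter internally at prom_node_exporter:9100 either way.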


Edit:

I used something similar to what @Daniel Lee shared, and it seems to work:

# my global config
global:
  scrape_interval:     10s # Scrape targets every 10 seconds.
  evaluation_interval: 10s # Evaluate rules every 10 seconds.

scrape_configs:
  # Scrape Prometheus itself
  - job_name: 'prometheus'
    scrape_interval: 10s
    scrape_timeout: 10s
    static_configs:
      - targets: ['localhost:9090']

  # Scrape the Node Exporter
  - job_name: 'node'
    scrape_interval: 10s
    static_configs:
      - targets: ['prom_node_exporter:9100']
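
For this file to take effect, it has to replace the stock configuration at /etc/prometheus/prometheus.yml inside the Prometheus container. A minimal sketch using a bind mount, assuming prometheus.yml sits next to docker-compose.yml (the host path is an assumption):

  prometheus_srv:
    image: prom/prometheus
    container_name: prometheus_server
    hostname: prometheus_server
    volumes:
      # Assumed host path; overrides the image's default config
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

The prom_node_exporter:9100 target then resolves through Compose's network DNS, since prom_node_exporter is the exporter's container_name.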

1 Answer:

Answer 0 (score: 1)

Here is an example from a Grafana test instance of Prometheus, showing both the Dockerfile and the YAML configuration file.

Dockerfile:

# Start from the official Prometheus image
FROM prom/prometheus
# Bake the scrape config into the image's config directory
ADD prometheus.yml /etc/prometheus/
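
To wire this Dockerfile into the compose file from the question, the prometheus_srv service would build it instead of pulling the stock image. A sketch, assuming the Dockerfile and prometheus.yml sit next to docker-compose.yml:

  prometheus_srv:
    build: .   # builds the Dockerfile above, baking prometheus.yml into the image
    container_name: prometheus_server
    hostname: prometheus_server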

YAML file (prometheus.yml):

# my global config
global:
  scrape_interval:     10s # Scrape targets every 10 seconds.
  evaluation_interval: 10s # Evaluate rules every 10 seconds.
  # scrape_timeout is set to the global default (10s).

# Load and evaluate rules in this file every 'evaluation_interval' seconds.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Scrape targets from this job every 10 seconds.
    scrape_interval: 10s
    scrape_timeout: 10s

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      #- targets: ['localhost:9090', '172.17.0.1:9091', '172.17.0.1:9100', '172.17.0.1:9150']
      - targets: ['localhost:9090', '127.0.0.1:9091', '127.0.0.1:9100', '127.0.0.1:9150']
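
Note that the 127.0.0.1 targets in this file only resolve to the other exporters when everything shares one network namespace (for example, when running directly on the host); in the compose setup from the question, the container name prom_node_exporter:9100, as used in the edit above, is the address Prometheus can actually reach. The commented-out 172.17.0.1 entries point at the default Docker bridge gateway rather than the loopback interface.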