Prometheus exporter - direct instrumentation vs. custom collector

Date: 2019-08-01 08:11:52

Tags: go, prometheus

I am currently writing a Prometheus exporter for a telemetry network application.

I have read the documentation at Writing Exporters, and while I understand the use case for implementing a custom collector to avoid race conditions, I am not sure whether my use case fits direct instrumentation instead.

Basically, the network metrics are streamed via gRPC by the network devices, so my exporter simply receives them rather than having to scrape them.

I implemented direct instrumentation with the following code:

  • I declare the metrics using the promauto package to keep the code compact:
package metrics

import (
    "github.com/lucabrasi83/prom-high-obs/proto/telemetry"
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
)

var (
    cpu5Sec = promauto.NewGaugeVec(
        prometheus.GaugeOpts{
            Name: "cisco_iosxe_iosd_cpu_busy_5_sec_percentage",
            Help: "The IOSd daemon CPU busy percentage over the last 5 seconds",
        },
        []string{"node"},
    )
)
  • Below, I simply set the metric value from the decoded gRPC protocol buffer message:
cpu5Sec.WithLabelValues(msg.GetNodeIdStr()).Set(float64(val))
  • Finally, here is the main loop, which processes the telemetry gRPC stream for the metrics I am interested in:
for {
    req, err := stream.Recv()
    if err == io.EOF {
        return nil
    }
    if err != nil {
        logging.PeppaMonLog(
            "error",
            fmt.Sprintf("Error while reading client %v stream: %v", clientIPSocket, err))

        return err
    }

    data := req.GetData()

    msg := &telemetry.Telemetry{}

    err = proto.Unmarshal(data, msg)
    if err != nil {
        log.Fatalln(err)
    }

    if !logFlag {
        logging.PeppaMonLog(
            "info",
            fmt.Sprintf(
                "Telemetry Subscription Request Received - Client %v - Node %v - YANG Model Path %v",
                clientIPSocket, msg.GetNodeIdStr(), msg.GetEncodingPath(),
            ),
        )
    }
    logFlag = true

    // Tracks whether the telemetry device streams an accepted YANG node path
    yangPathSupported := false

    for _, m := range metrics.CiscoMetricRegistrar {
        if msg.EncodingPath == m.EncodingPath {
            yangPathSupported = true
            go m.RecordMetricFunc(msg)
        }
    }
}
  • For each metric I am interested in, I register it with a record metric function (m.RecordMetricFunc) that takes the protocol buffer message as an argument, as shown below.
package metrics

import "github.com/lucabrasi83/prom-high-obs/proto/telemetry"

var CiscoMetricRegistrar []CiscoTelemetryMetric

type CiscoTelemetryMetric struct {
    EncodingPath     string
    RecordMetricFunc func(msg *telemetry.Telemetry)
}

  • Then I use an init function to do the actual registration:

func init() {
    CiscoMetricRegistrar = append(CiscoMetricRegistrar, CiscoTelemetryMetric{
        EncodingPath:     CpuYANGEncodingPath,
        RecordMetricFunc: ParsePBMsgCpuBusyPercent,
    })
}
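
For reference, the record function itself boils down to the one-line setter shown earlier; a minimal sketch (the extractCpu5SecValue helper is hypothetical, standing in for whatever walks the decoded telemetry fields):

// Sketch of a record metric function in the metrics package.
// extractCpu5SecValue is a hypothetical helper that pulls the 5-second
// CPU busy value out of the decoded telemetry message.
func ParsePBMsgCpuBusyPercent(msg *telemetry.Telemetry) {
    val := extractCpu5SecValue(msg)
    cpu5Sec.WithLabelValues(msg.GetNodeIdStr()).Set(float64(val))
}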

I use Grafana as a frontend and, so far, have not noticed any particular discrepancy when correlating the metrics exposed by Prometheus against the metrics checked directly on the device.

So I would like to understand whether this follows Prometheus best practices, or whether I should go down the custom collector route instead.

Thank you.

1 Answer:

Answer 0 (score: 2)

You are not following best practice, because you are using the global metrics that the article you linked warns about. With your current implementation, the dashboard will forever show an arbitrary, constant value for the CPU metric after a device disconnects (or, more precisely, until the exporter is restarted).
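
(Even with global vectors, the stale series could at least be dropped explicitly when a stream ends, via the vector's DeleteLabelValues method. A minimal sketch, assuming a cleanupDeviceMetrics helper invoked from the stream handler; the collector approach below avoids this bookkeeping entirely.)

// Sketch: with direct instrumentation, drop a device's label set when its
// stream ends so the stale series disappears from the scrape output.
func cleanupDeviceMetrics(nodeID string) {
    // DeleteLabelValues returns false if no matching series existed.
    cpu5Sec.DeleteLabelValues(nodeID)
}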

Instead, the RPC method should maintain a set of local metrics and remove them once the method returns. That way, the device's metrics vanish from the scrape output when it disconnects.

Here is one way to do this. It uses a map containing the currently active metrics. Each map element is the set of metrics for one particular stream (which, as I understand it, corresponds to one device). Once the stream ends, that entry is removed.

package main

import (
    "io"
    "sync"

    "github.com/prometheus/client_golang/prometheus"
)

// Exporter is a prometheus.Collector implementation.
type Exporter struct {
    // We need some way to map gRPC streams to their metrics. Using the stream
    // itself as a map key is simple enough, but anything works as long as we
    // can remove metrics once the stream ends.
    sync.Mutex
    Metrics map[StreamServer]*DeviceMetrics
}

type DeviceMetrics struct {
    sync.Mutex

    CPU prometheus.Metric
}

// Globally defined descriptions are fine.
var cpu5SecDesc = prometheus.NewDesc(
    "cisco_iosxe_iosd_cpu_busy_5_sec_percentage",
    "The IOSd daemon CPU busy percentage over the last 5 seconds",
    []string{"node"},
    nil, // constant labels
)

// Collect implements prometheus.Collector.
func (e *Exporter) Collect(ch chan<- prometheus.Metric) {
    // Copy current metrics so we don't lock for very long if ch's consumer is
    // slow.
    var metrics []prometheus.Metric

    e.Lock()
    for _, deviceMetrics := range e.Metrics {
        deviceMetrics.Lock()
        metrics = append(metrics,
            deviceMetrics.CPU,
        )
        deviceMetrics.Unlock()
    }
    e.Unlock()

    for _, m := range metrics {
        if m != nil {
            ch <- m
        }
    }
}

// Describe implements prometheus.Collector.
func (e *Exporter) Describe(ch chan<- *prometheus.Desc) {
    ch <- cpu5SecDesc
}

// Service is the gRPC service implementation.
type Service struct {
    exp *Exporter
}

func (s *Service) RPCMethod(stream StreamServer) (*Response, error) {
    deviceMetrics := new(DeviceMetrics)

    s.exp.Lock()
    s.exp.Metrics[stream] = deviceMetrics
    s.exp.Unlock()

    defer func() {
        // Stop emitting metrics for this stream.
        s.exp.Lock()
        delete(s.exp.Metrics, stream)
        s.exp.Unlock()
    }()

    for {
        req, err := stream.Recv()
        if err == io.EOF {
            return &Response{}, nil // client closed the stream cleanly
        }
        if err != nil {
            return nil, err // the deferred delete above removes this stream's metrics
        }

        var msg *Telemetry = parseRequest(req) // Your existing code that unmarshals the nested message.

        var (
            metricField *prometheus.Metric
            metric      prometheus.Metric
        )

        switch msg.GetEncodingPath() {
        case CpuYANGEncodingPath:
            metricField = &deviceMetrics.CPU
            metric = prometheus.MustNewConstMetric(
                cpu5SecDesc,
                prometheus.GaugeValue,
                ParsePBMsgCpuBusyPercent(msg), // func(*Telemetry) float64
                msg.GetNodeIdStr(), // label values only; the "node" label name comes from the Desc
            )
        default:
            continue
        }

        deviceMetrics.Lock()
        *metricField = metric
        deviceMetrics.Unlock()
    }
}
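
For completeness, here is a sketch of how the collector could be wired up, in the same main package as above. The registry setup, the /metrics endpoint, and the port are assumptions rather than part of the code above, and the net/http and promhttp imports would be needed:

// Wiring sketch (assumed, not part of the answer's code).
func main() {
    exp := &Exporter{Metrics: make(map[StreamServer]*DeviceMetrics)}

    // A dedicated registry exposes only this collector's metrics; the
    // default registry would work as well.
    reg := prometheus.NewRegistry()
    reg.MustRegister(exp)

    // Serve the scrape endpoint. The gRPC server hosting &Service{exp: exp}
    // would be started separately (omitted here).
    http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
    http.ListenAndServe(":2112", nil)
}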