Losing 20% of metric values in StatsD

Date: 2013-06-28 14:17:47

Tags: graph metrics graphite statsd

I need to monitor a real-time application. This application receives 60 connections per second, and each connection produces 53 metrics.

So my simulated client sends 3,180 metrics per second. I need the lower, upper, average, median, and count_ps values, which is why I use the "timing" type.
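For a timer, StatsD's count_ps is the number of samples seen in a flush window divided by the flush interval in seconds. A quick sanity check of what the simulation above should report (assuming one timing call per metric per second and the 60000 ms flushInterval from the config below):

```python
# Expected count_ps for one timer metric.  statsd's flushInterval is in
# milliseconds; count_ps = samples per flush / flush interval in seconds.
flush_interval_ms = 60000
samples_per_second = 60          # one timing call per connection per second

count = samples_per_second * (flush_interval_ms / 1000)   # samples per flush
count_ps = count / (flush_interval_ms / 1000)             # what statsd reports

print(count_ps)   # 60.0 expected; the question observes ~40
```

Seeing ~40 instead of 60 means roughly a third of the samples never reach StatsD within the flush window.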

When I look at the count_ps of one metric on the StatsD side, I only get 40 instead of 60. I couldn't find any information about StatsD's capacity. Maybe I'm overloading it ^^

Can you help me? What are my options?

I can't reduce the number of metrics, but I don't need all the information the "timing" type provides. Can I restrict what "timing" computes?

Thanks!

My configuration:

1) cat storage-schemas.conf

# Schema definitions for Whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds.
#
#  [name]
#  pattern = regex
#  retentions = timePerPoint:timeToStore, timePerPoint:timeToStore, ...

# Carbon's internal metrics. This entry should match what is specified in
# CARBON_METRIC_PREFIX and CARBON_METRIC_INTERVAL settings
[carbon]
pattern = ^carbon\.
retentions = 60:90d

[stats]
pattern = ^application.*
retentions = 60s:7d
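One thing worth checking: schema rules only apply to metric names matching their pattern. With legacyNamespace=false and an empty globalPrefix (see dConfig.js below), the graphite backend emits timer metrics under names like `timers.TPS.global.count_ps` (assumed naming), which the `[stats]` pattern here does not match, so those whisper files fall through to whatever default schema carbon applies:

```python
import re

# The [stats] rule's pattern, tested against the kind of name the statsd
# graphite backend would emit for the client's timers (name is an assumption
# based on legacyNamespace=false and an empty globalPrefix).
pattern = re.compile(r"^application.*")
sample = "timers.TPS.global.count_ps"

print(bool(pattern.match(sample)))   # False: this rule does not apply
```

That would not by itself explain a lower count_ps value, but it can cause surprising retention behavior.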

2) cat dConfig.js

{
  graphitePort: 2003
, graphiteHost: "127.0.0.1"
, port: 8125
, backends: [ "./backends/graphite", "./backends/console" ]
, flushInterval: 60000
, debug: true
, graphite: { legacyNamespace: false, globalPrefix: "", prefixGauge: "", prefixCounter: "", prefixTimer: "", prefixSet: ""}
}
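To rule out delivery problems independently of the client library, a single timer sample can be sent by hand; a minimal sketch, assuming the daemon listens on UDP port 8125 as configured above (`smoke.test` is a hypothetical metric name):

```python
import socket

# Fire-and-forget reachability check: one timer sample in StatsD's wire
# format "<name>:<value>|ms".  With debug=true and the console backend
# enabled, the sample should appear in statsd's console output at the
# next flush.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
n = sock.sendto(b"smoke.test:42|ms", ("127.0.0.1", 8125))
sock.close()
```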

3) cat storage-aggregation.conf

# Aggregation methods for whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds
#
#  [name]
#  pattern = <regex>
#  xFilesFactor = <float between 0 and 1>
#  aggregationMethod = <average|sum|last|max|min>
#
#  name: Arbitrary unique name for the rule
#  pattern: Regex pattern to match against the metric name
#  xFilesFactor: Ratio of valid data points required for aggregation to the next retention to occur
#  aggregationMethod: function to apply to data points for aggregation
#
[min]
pattern = \.lower$
xFilesFactor = 0.1
aggregationMethod = min

[max]
pattern = \.upper$
xFilesFactor = 0.1
aggregationMethod = max

[sum]
pattern = \.sum$
xFilesFactor = 0
aggregationMethod = sum

[count]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum

[count_legacy]
pattern = ^stats_counts.*
xFilesFactor = 0
aggregationMethod = sum

[default_average]
pattern = .*
xFilesFactor = 0.3

4) Client:

#!/usr/bin/env python
import math
import time

import statsd

c = statsd.StatsClient('localhost', 8125)
k = 0
nb_data = 60   # 60 simulated connections per batch
pause = 1      # target seconds per batch

while True:
    print(k)
    k += pause
    # time.perf_counter() measures wall-clock time; the original used
    # time.clock(), which on Unix returns CPU time and so undercounts
    # time spent waiting on the 3,180 sendto() calls per batch.
    tps1 = time.perf_counter()
    for j in range(nb_data):
        digit = j % 10 + k * 10 + math.sin(j / 500)
        c.timing('TPS.global', digit)
        c.timing('TPS.interne', digit)
        c.timing('TPS.externe', digit)
        for i in range(5):
            for prefix in ('TPS', 'CR'):
                for group in ('a', 'b', 'c', 'd', 'e'):
                    c.timing('%s.%s.%d' % (prefix, group, i), digit)
    tps2 = time.perf_counter()
    print('temps = ' + str(tps2 - tps1))
    if k >= 60:
        k = 0
    # Only sleep for the remaining slice of the second; the original
    # could pass a negative value to time.sleep() when a batch overran.
    remaining = pause - (tps2 - tps1)
    if remaining > 0:
        time.sleep(remaining)

Edit: added the client code.

2 Answers:

Answer 0: (score: 0)

Without more context it's hard to say what might be going on. Are you using sampling when sending data to StatsD? What hardware are you running StatsD on? Is your simulation running entirely on localhost? Are you running it over a lossy connection?

There is currently no way to limit timing metrics to only certain computed values.

Sorry I can't be of more direct help. If your problem persists, consider dropping into #statsd on Freenode IRC and asking there.

Answer 1: (score: 0)

What is your CARBON_METRIC_INTERVAL set to? I suspect it needs to match the StatsD flushInterval.
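For reference, a sketch of the relevant carbon.conf lines under that suggestion (a 60-second interval to match the 60000 ms flushInterval above; the prefix value is carbon's default):

```ini
# carbon.conf -- carbon's self-reporting interval, in seconds
CARBON_METRIC_PREFIX = carbon
CARBON_METRIC_INTERVAL = 60
```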