From multiple data sources

Time: 2017-10-18 19:02:05

Tags: python dictionary networking nested

Complete rewrite, since the original post was unclear. What I am trying to do is parse some data line by line and build a dictionary, and I suspect there is a better way to organize that data. My original approach failed to account for a few things, which is why I am asking. I loop line by line through service-policy output, aggregate the data by interface and policy name, and pull out the queue depth, drops, and no-buffer drops. The problem is that my code does not account for additional policies, so the data from the first policy gets overwritten.

Service-policy output:

GigabitEthernet11/1

Service-policy output: Gi11_1

Counters last updated 7191104 seconds ago

Class-map: class-default (match-any)
  0 packets, 0 bytes
  30 second offered rate 0000 bps, drop rate 0000 bps
  Match: any 
  Queueing
  queue limit 33025 packets
  (queue depth/total drops/no-buffer drops) 0/0/0
  (pkts output/bytes output) 0/0
  shape (average) cir 500000000, bc 2000000, be 2000000
  target shape rate 500000000

  Service-policy : child

  Counters last updated 7191104 seconds ago

    Class-map: class-default (match-any)
      0 packets, 0 bytes
      30 second offered rate 0000 bps, drop rate 0000 bps
      Match: any 
      Queueing
      queue limit 33025 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 0/0
      bandwidth remaining ratio 100 

for ints, int_strings in zip(int_names, int_output):
    counts.setdefault(ints, {})

    for line in int_strings.splitlines():
        matchpolicy = re.search(r'(Service-policy.*)', line)
        matchdrops = re.findall(r'total drops.*', line)
        if matchpolicy:
            spolicies = matchpolicy.group(0)
            counts[ints]['Policy'] = spolicies
        if matchdrops:
            regx = re.search(r'\s(\d+)\/(\d+)\/(\d+)', line)
            counts[ints]['queue'] = int(regx.group(1))
            counts[ints]['drops'] = int(regx.group(2))
            counts[ints]['no-buffer'] = int(regx.group(3))

return counts
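The overwrite described above can be seen in miniature: with a single 'Policy' key per interface, each new Service-policy match clobbers the previous one. A tiny illustration (not the full loop), using the interface and policy names from the sample output:

```python
# One 'Policy' slot per interface means the second Service-policy
# match replaces the first.
counts = {'GigabitEthernet11/1': {}}
for line in ['Service-policy output: Gi11_1', 'Service-policy : child']:
    counts['GigabitEthernet11/1']['Policy'] = line  # overwritten on 2nd pass

# Only the child policy survives:
# counts == {'GigabitEthernet11/1': {'Policy': 'Service-policy : child'}}
```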

I tried creating a dictionary with an extra level of depth, but I get a KeyError on the counts[ints][spolicies] lines. From what I have read, I thought this was how nested dictionaries work, but apparently I misunderstood.

for ints, int_strings in zip(int_names, int_output):
    counts.setdefault(ints, {})

    for line in int_strings.splitlines():
        matchpolicy = re.search(r'(Service-policy.*)', line)
        matchdrops = re.findall(r'total drops.*', line)
        if matchpolicy:
            spolicies = matchpolicy.group(0)
            counts[ints][spolicies] 
        if matchdrops:
            regx = re.search(r'\s(\d+)\/(\d+)\/(\d+)', line)
            counts[ints][spolicies]['queue'] = int(regx.group(1))
            counts[ints][spolicies]['drops'] = int(regx.group(2))
            counts[ints][spolicies]['no-buffer'] = int(regx.group(3))

return counts

Either way, I assume there is probably a better way to organize this data so I can work with it more easily later. Any thoughts?
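One way to avoid both the overwrite and the KeyError is to assign an empty inner dict when a Service-policy line matches (rather than just looking the key up), and to remember the current policy while scanning the drop counters that follow. A minimal self-contained sketch of that idea, with sample strings trimmed from the output above (one drop count changed to 5 so the result is visible):

```python
import re
from collections import defaultdict

# Stand-ins for the zip(int_names, int_output) pairs in the question.
int_names = ['GigabitEthernet11/1']
int_output = ['''Service-policy output: Gi11_1
  (queue depth/total drops/no-buffer drops) 0/0/0
  Service-policy : child
  (queue depth/total drops/no-buffer drops) 0/5/0''']

counts = defaultdict(dict)  # interface -> policy -> counters
for ints, int_strings in zip(int_names, int_output):
    spolicies = None  # most recently seen policy on this interface
    for line in int_strings.splitlines():
        matchpolicy = re.search(r'Service-policy.*', line)
        if matchpolicy:
            spolicies = matchpolicy.group(0)
            counts[ints][spolicies] = {}  # assign, don't just look up
        elif 'total drops' in line and spolicies:
            regx = re.search(r'(\d+)/(\d+)/(\d+)', line)
            counts[ints][spolicies]['queue'] = int(regx.group(1))
            counts[ints][spolicies]['drops'] = int(regx.group(2))
            counts[ints][spolicies]['no-buffer'] = int(regx.group(3))
```

Because each policy name becomes its own key, a child policy no longer overwrites its parent's counters.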

1 Answer:

Answer 0 (score: 0)

labels = ["depth", "drops", "buffer_drops"]
values = ['0', '14996', '0', '0', '2100', '0']
keys = ['Gi1', 'Gi2']

values_grouped_by_3 = list(zip(*[iter(values)] * 3))
data = dict(zip(keys, [dict(zip(labels, vals)) for vals in values_grouped_by_3]))
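To unpack the trick: `zip(*[iter(values)]*3)` passes three references to one shared iterator, so each zip step consumes three consecutive items, chunking the flat list into triples. An annotated, runnable copy (note the values stay as strings):

```python
labels = ["depth", "drops", "buffer_drops"]
values = ['0', '14996', '0', '0', '2100', '0']
keys = ['Gi1', 'Gi2']

# Three aliases of one iterator: each zip() step pulls three items in a row.
values_grouped_by_3 = list(zip(*[iter(values)] * 3))
# -> [('0', '14996', '0'), ('0', '2100', '0')]

# Pair each interface with a {label: value} dict built from its triple.
data = dict(zip(keys, [dict(zip(labels, vals)) for vals in values_grouped_by_3]))
# -> {'Gi1': {'depth': '0', 'drops': '14996', 'buffer_drops': '0'},
#     'Gi2': {'depth': '0', 'drops': '2100', 'buffer_drops': '0'}}
```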

If you want more of a tutorial and real help, put in some effort first and post your attempt along with your expected and actual output.