I have some code to parse an Apache log file (start_search and end_search are date strings in the format found in the Apache log):
import csv
import re
from itertools import takewhile, dropwhile

with open("/var/log/apache2/access.log", 'r') as log:
    s_log = dropwhile(lambda L: start_search not in L, log)
    e_log = takewhile(lambda L: end_search not in L, s_log)
    query = [line for line in e_log if re.search(r'GET /(.+veggies|.+fruits)', line)]

query_dict = csv.DictReader(query, fieldnames=('ip', 'na-1', 'na-2', 'time', 'zone', 'url', 'refer', 'client'), quotechar='"', delimiter=" ")
veggies = [x for x in query_dict if re.search('veggies', x['url'])]
fruits = [x for x in query_dict if re.search('fruits', x['url'])]
The second list comprehension is always empty; that is, if I swap the order of the last two lines:

fruits = [x for x in query_dict if re.search('fruits', x['url'])]
veggies = [x for x in query_dict if re.search('veggies', x['url'])]

the second list is still always empty. Why? (And how do I fill both the fruits and veggies lists?)
Answer 0 (score: 7)
You can only loop over an iterator once; query_dict is an iterator, and once you have scanned it for veggies there is nothing left to iterate over when searching for fruits.
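A minimal sketch (with hypothetical data, not the log entries from the question) of why the second comprehension comes back empty:

```python
# A plain iterator is exhausted by the first full pass over it.
it = iter([1, 2, 3, 4])
evens = [x for x in it if x % 2 == 0]  # consumes the whole iterator
odds = [x for x in it if x % 2 == 1]   # nothing left to yield
print(evens)  # [2, 4]
print(odds)   # []
```

The same thing happens with csv.DictReader, which is also a one-shot iterator.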
Don't use list comprehensions here. Loop over query_dict once, testing each entry for both veggies and fruits:
veggies = []
fruits = []
for x in query_dict:
    if re.search('veggies', x['url']):
        veggies.append(x)
    if re.search('fruits', x['url']):
        fruits.append(x)
Alternatives are:

Re-create the csv.DictReader() object for the fruits list:
query_dict = csv.DictReader(query,fieldnames=('ip','na-1','na-2','time', 'zone', 'url', 'refer', 'client'),quotechar='"',delimiter=" ")
veggies = [ x for x in query_dict if re.search('veggies',x['url']) ]
query_dict = csv.DictReader(query,fieldnames=('ip','na-1','na-2','time', 'zone', 'url', 'refer', 'client'),quotechar='"',delimiter=" ")
fruits = [ x for x in query_dict if re.search('fruits',x['url']) ]
This is double the work; you iterate over the whole dataset twice.
Use itertools.tee() to "clone" the iterator:
from itertools import tee
veggies_query_dict, fruits_query_dict = tee(query_dict)
veggies = [ x for x in veggies_query_dict if re.search('veggies',x['url']) ]
fruits = [ x for x in fruits_query_dict if re.search('fruits',x['url']) ]
This ends up caching all of query_dict in the tee buffers, requiring double the memory for the same task, until the fruits list comprehension drains the buffers again.
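The buffering trade-off above can be sketched with hypothetical stand-in data (a simple list of strings rather than the question's DictReader rows):

```python
from itertools import tee

# Any one-shot iterator behaves the same way as query_dict here.
records = iter(['veggies', 'fruits', 'veggies'])
a, b = tee(records)

# Fully consuming 'a' first forces tee to buffer every item for 'b',
# so peak memory grows to the whole dataset until 'b' drains it.
veggies = [r for r in a if 'veggies' in r]
fruits = [r for r in b if 'fruits' in r]
print(veggies)  # ['veggies', 'veggies']
print(fruits)   # ['fruits']
```

If the two consumers advanced in lockstep instead, tee would only buffer the small gap between them, which is the case its documentation recommends.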