I have a wget log file and would like to parse it so that I can extract the relevant information from each log entry, e.g. IP address, timestamp, URL, etc.
A sample log file is printed below. The number of lines and the level of detail differ between entries; what is consistent is the notation on each line.
I am able to extract individual lines, but what I really want is a multi-dimensional array (or something similar):
import re

f = open('c:/r1/log.txt', 'r').read()

split_log = re.findall('--[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}.*', f)

print(split_log)
print(len(split_log))

for element in split_log:
    print(element)
####### Start log file example
2014-11-22 10:51:31 (96.9 KB/s) - `C:/r1/www.itb.ie/AboutITB/index.html' saved [13302]
--2014-11-22 10:51:31-- http://www.itb.ie/CurrentStudents/index.html
Connecting to www.itb.ie|193.1.36.24|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: ignored [text/html]
Saving to: `C:/r1/www.itb.ie/CurrentStudents/index.html'
0K .......... ....... 109K=0.2s
2014-11-22 10:51:31 (109 KB/s) - `C:/r1/www.itb.ie/CurrentStudents/index.html' saved [17429]
--2014-11-22 10:51:32-- h ttp://www.itb.ie/Vacancies/index.html
Connecting to www.itb.ie|193.1.36.24|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: ignored [text/html]
Saving to: `C:/r1/www.itb.ie/Vacancies/index.html'
0K .......... .......... .. 118K=0.2s
2014-11-22 10:51:32 (118 KB/s) - `C:/r1/www.itb.ie/Vacancies/index.html' saved [23010]
--2014-11-22 10:51:32-- h ttp://www.itb.ie/Location/howtogetthere.html
Connecting to www.itb.ie|193.1.36.24|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: ignored [text/html]
Saving to: `C:/r1/www.itb.ie/Location/howtogetthere.html'
0K .......... ....... 111K=0.2s
Answer 0 (score: 1)
Here's how to extract the data you want and store it in a list of tuples.
The regexes I use here aren't perfect, but they work fine on your sample data. I modified your original regex to use the more readable \d instead of the equivalent [0-9]. I also used raw strings, which generally make regexes easier to work with.
I've embedded the log data in my code as a triple-quoted string so I don't have to worry about file handling. I noticed that some of the URLs in the log file contain spaces, e.g.
h ttp://www.itb.ie/Vacancies/index.html
but I assume those spaces are a copy-and-paste artifact and don't actually exist in the real log data. If that's not the case, your program will need to do extra work to cope with those extraneous spaces.
I've also modified the IP addresses in the log data so that they aren't all identical, just to make sure that each IP found by findall is correctly associated with the right timestamp & URL.
#! /usr/bin/env python
import re
log_lines = '''
2014-11-22 10:51:31 (96.9 KB/s) - `C:/r1/www.itb.ie/AboutITB/index.html' saved [13302]
--2014-11-22 10:51:31-- http://www.itb.ie/CurrentStudents/index.html
Connecting to www.itb.ie|193.1.36.24|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: ignored [text/html]
Saving to: `C:/r1/www.itb.ie/CurrentStudents/index.html'
0K .......... ....... 109K=0.2s
2014-11-22 10:51:31 (109 KB/s) - `C:/r1/www.itb.ie/CurrentStudents/index.html' saved [17429]
--2014-11-22 10:51:32-- http://www.itb.ie/Vacancies/index.html
Connecting to www.itb.ie|193.1.36.25|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: ignored [text/html]
Saving to: `C:/r1/www.itb.ie/Vacancies/index.html'
0K .......... .......... .. 118K=0.2s
2014-11-22 10:51:32 (118 KB/s) - `C:/r1/www.itb.ie/Vacancies/index.html' saved [23010]
--2014-11-22 10:51:32-- http://www.itb.ie/Location/howtogetthere.html
Connecting to www.itb.ie|193.1.36.26|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: ignored [text/html]
Saving to: `C:/r1/www.itb.ie/Location/howtogetthere.html'
0K .......... ....... 111K=0.2s
'''
time_and_url_pat = re.compile(r'--(\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2})--\s+(.*)')
ip_pat = re.compile(r'Connecting to.*\|(.*?)\|')

time_and_url_list = time_and_url_pat.findall(log_lines)
print('\ntime and url\n', time_and_url_list)

ip_list = ip_pat.findall(log_lines)
print('\nip\n', ip_list)

all_data = [(t, u, i) for (t, u), i in zip(time_and_url_list, ip_list)]
print('\nall\n', all_data, '\n')

for t in all_data:
    print(t)
Output
time and url
[('2014-11-22 10:51:31', 'http://www.itb.ie/CurrentStudents/index.html'), ('2014-11-22 10:51:32', 'http://www.itb.ie/Vacancies/index.html'), ('2014-11-22 10:51:32', 'http://www.itb.ie/Location/howtogetthere.html')]
ip
['193.1.36.24', '193.1.36.25', '193.1.36.26']
all
[('2014-11-22 10:51:31', 'http://www.itb.ie/CurrentStudents/index.html', '193.1.36.24'), ('2014-11-22 10:51:32', 'http://www.itb.ie/Vacancies/index.html', '193.1.36.25'), ('2014-11-22 10:51:32', 'http://www.itb.ie/Location/howtogetthere.html', '193.1.36.26')]
('2014-11-22 10:51:31', 'http://www.itb.ie/CurrentStudents/index.html', '193.1.36.24')
('2014-11-22 10:51:32', 'http://www.itb.ie/Vacancies/index.html', '193.1.36.25')
('2014-11-22 10:51:32', 'http://www.itb.ie/Location/howtogetthere.html', '193.1.36.26')
The last part of this code uses a list comprehension to reorganize the data in time_and_url_list and ip_list into a single list of tuples, using the zip built-in function to process the two lists in parallel. If that part is a little hard to understand, please let me know & I'll try to explain it further.
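To see the zip + list-comprehension step in isolation, here is a minimal sketch with toy placeholder data (the tuples and IPs below are made up for illustration, not taken from the log):

```python
# (timestamp, url) pairs, as produced by time_and_url_pat.findall()
pairs = [('t1', 'u1'), ('t2', 'u2')]
# IP addresses, as produced by ip_pat.findall()
ips = ['1.1.1.1', '2.2.2.2']

# zip walks both lists in parallel; the comprehension unpacks each
# ((t, u), i) item and flattens it into a single 3-tuple.
merged = [(t, u, i) for (t, u), i in zip(pairs, ips)]
print(merged)  # [('t1', 'u1', '1.1.1.1'), ('t2', 'u2', '2.2.2.2')]
```

Note that zip stops at the shorter list, so if an entry in the log has a timestamp but no "Connecting to" line (or vice versa), the pairing will silently drift out of alignment.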