Here is my log data:
30/10/2016 17:18:51 [13] 10-Full: L 1490; A 31; F 31; S 31; DL 0; SL 0; DT 5678
30/10/2016 17:18:51 [13] 00-Always: Returning 31 matches
30/10/2016 17:18:51 [13] 30-Normal: Query complete
30/10/2016 17:18:51 [13] 30-Normal: Request completed in 120 ms.
30/10/2016 17:19:12 [15] 00-Always: Request from 120.0.0.1
30/10/2016 17:19:12 [15] 00-Always: action=Query&Text=(("XXXXXX":*/DOCUMENT/DRECONTENT/ObjectInfo/type+OR+"XXXXXX":*/DOCUMENT/.....
30/10/2016 17:19:12 [15] 10-Full: L 2; A 1; F 1; S 0; DL 0; SL 0; DT 5373
30/10/2016 17:19:12 [15] 00-Always: Returning 0 matches
30/10/2016 17:19:12 [15] 30-Normal: Query complete
30/10/2016 17:19:12 [15] 30-Normal: Request completed in 93 ms.
30/10/2016 17:19:20 [17] 00-Always: Request from 120.0.0.1
30/10/2016 17:19:20 [17] 00-Always: action=Query&Text=((PDF:*/DOCUMENT/DRECONTENT/XXXXX/type+AND+XXXXXX.......
30/10/2016 17:19:51 [19] 10-Full: L 255; A 0; F 0; S 0; DL 0; SL 0; DT 5021
30/10/2016 17:19:51 [19] 00-Always: Returning 0 matches
30/10/2016 17:19:51 [19] 30-Normal: Query complete
30/10/2016 17:19:51 [19] 30-Normal: Request completed in 29 ms.
30/10/2016 17:20:44 [27] 00-Always: Request from 120.0.0.1
30/10/2016 17:20:44 [27] 00-Always: action=Query&Tex(Image:*/DOCUMENT/DRECONTENT/ObjectInfo/type+AND+(
30/10/2016 17:20:44 [27] 10-Full: L 13; A 0; F 0; S 0; DL 0; SL 0; DT 5235
30/10/2016 17:20:44 [27] 00-Always: Returning 0 matches
30/10/2016 17:20:44 [27] 30-Normal: Query complete
30/10/2016 17:20:44 [27] 30-Normal: Request completed in 27 ms.
30/10/2016 17:21:09 [25] 00-Always: Request from 120.0.0.1
30/10/2016 17:21:09 [25] 00-Always: action=Query&Text=XXXXXX:*/DOCUMENT/DRECONTENT/ObjectIn
This is my data set, and there are millions of these lines. I want to analyse how long each query took, who it came from, and what the request looked like. The rest I would like to drop.
My expected output:
30/10/2016;17:19:12;Request completed in 93 ms.;Request from 120.0.0.1;action=Query&Text=((PDF:*/DOCUMENT/DRECONTENT/XXXXX....
30/10/2016;17:18:51;Request completed in 120 ms.;Request from 120.0.0.1;action=Query&Text=(("EOM.CompoundStory":*/DOCUMENT/DRECONTE....
30/10/2016;17:19:51;Request completed in 29 ms.;Request from 120.0.0.1;action=Query&Text=(Image:*/DOCUMENT/DRECONTENT/ObjectInfo/type+AND+((.....
30/10/2016;17:20:44;Request completed in 27 ms.;Request from 120.0.0.1;action=Query&Text=XXXXX:*/DOCUMENT/DRECONT....
If possible I would like to solve this in Python with pandas. I already have one approach:
import csv
import pandas

with open('query.csv', 'rt') as f, open('leertest.csv', 'w') as outf:
    reader = csv.reader(f, delimiter=' ')
    writer = csv.writer(outf, delimiter=';', quoting=csv.QUOTE_MINIMAL)
    for row in reader:
        for field in row:
            if field == "Request":
                print(row)
But unfortunately it didn't work. Maybe you have a better approach.
I'm also happy to look at new techniques, as long as they don't take too long to learn.
Answer 0 (score: 1)
With pandas, you could do something like the following:
import pandas as pd

column_headers = ['Date', 'Time', 'Duration', 'IP', 'Request']
df = pd.DataFrame([], columns=column_headers)
df.to_csv('out.log', index=None, sep=';')
# if you don't want to include a header line, skip the previous lines and start here
for df in pd.read_csv('data.log', sep='\s', header=None, chunksize=6):
    df.reset_index(drop=True, inplace=True)
    df.fillna('', inplace=True)
    # pick the date, time, duration, IP and request fields out of the six-line group
    d = pd.DataFrame([df.loc[3, 0], df.loc[3, 1], ' '.join(df.loc[3, 4:8]),
                      ' '.join(df.loc[4, 4:6]), ' '.join(df.loc[5, 4:])])
    d.T.to_csv('out.log', index=False, header=False, mode='a', sep=';')
Or a non-pandas approach:
column_headers = ['Date', 'Time', 'Duration', 'IP', 'Request']
with open('data.log') as log, open('out.log', 'w') as out:
    out.write(';'.join(column_headers) + '\n')  # skip this line if you don't want to include column headers
    while True:
        try:
            # take six lines at a time, keep the last three, and split each into at most five fields
            lines = [next(log).strip('\n').split(' ', 4) for i in range(6)][3:]
            out.write(';'.join(lines[0][:2] + [l[4] for l in lines]) + '\n')
        except StopIteration:
            break
Both of the approaches above work in essentially the same way. They read your file (which I have named data.log) six lines at a time, since, from your example, that appears to be the number of lines per group. They then use list slicing, or the pandas .loc accessor, to pick the relevant values out of each group of lines. Finally, they append those values, separated by ;, to the end of the output file (which I have named out.log).
Note that both examples avoid loading the whole file into memory at once, which matters because millions of lines of data could otherwise cause problems or really slow things down.
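One caveat: both sketches assume that every request always produces exactly six log lines. If the group size can vary, a version keyed on line content rather than line position might be more robust. The following is only a sketch under that assumption; data.log and out.log are the same assumed file names as above, and the prefixes ('Request from', 'action=', 'Request completed') are taken from the sample lines in the question:

with open('data.log') as log, open('out.log', 'w') as out:
    out.write('Date;Time;Duration;IP;Request\n')
    record = {}
    for line in log:
        parts = line.strip('\n').split(' ', 4)
        if len(parts) < 5:
            continue
        date, time, _, _, rest = parts
        if rest.startswith('Request from'):
            # start a new record at the "Request from" line
            record = {'IP': rest}
        elif rest.startswith('action='):
            record['Request'] = rest
        elif rest.startswith('Request completed') and 'IP' in record and 'Request' in record:
            # write one output row per completed request, then reset
            out.write(';'.join([date, time, rest, record['IP'], record['Request']]) + '\n')
            record = {}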
Edit:
I have updated the examples above to show how to add column headers. If you don't want to include column headers, skip the first three lines of the pandas example, and skip the first line after the with statement in the non-pandas example.
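As a follow-up, once out.log has been written by either approach, the analysis asked about in the question (how long the queries took and who sent them) could start from something like the sketch below. The column names are the ones defined above; the regular expression that pulls the millisecond value out of the 'Duration' text is an assumption based on the sample lines:

import pandas as pd

# read the extracted records back in; 'Duration' holds text like "Request completed in 120 ms."
results = pd.read_csv('out.log', sep=';')
results['ms'] = results['Duration'].str.extract(r'(\d+)', expand=False).astype(int)

# e.g. average / maximum response time and request count per requesting IP
print(results.groupby('IP')['ms'].agg(['mean', 'max', 'count']))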