I am trying to create a script that automates running the following tshark command.
tshark -T fields -e _ws.col.No -e _ws.col.Time -e _ws.col.Source -e _ws.col.Destination -e _ws.col.Protocol -e _ws.col.Length -e _ws.col.Info -w xxFileNamexx.pcap > xxFileNamexx.csv
If I run the above command manually, it outputs two files - a .pcap file and a .csv file. The contents of the .pcap file can be viewed in Wireshark, and the .csv file can be viewed in Excel with the following columns.
No. Time Source Destination Protocol Length Info
I want to automate running the tshark command every 5 minutes, producing a new .csv file and .pcap file in the same output format as above. The files should be named using the date/time.
So far I have created the script below, which successfully runs the tshark command every 5 minutes (for a duration of 60 seconds) and outputs a file (named with the date/time), but my formatting must be wrong because I cannot open the file in either Wireshark or Excel. When I try to open it in Wireshark I get an error saying (vwr: invalid data length 6448693 (runs past the end of the record)), and when I open it in Excel all of the data appears to be on one line and the columns are not created correctly.
How can I modify the script so that the tshark capture is written to a .csv file in the same format as when I run the command manually, and, if possible, also produce the .pcap file?
import os
from datetime import datetime
import schedule
import subprocess
import time


class Sniff:
    def __init__(self, secs):
        self.dir = self.create_dir()
        self.filename = self.dir + "/" + str(datetime.now())
        print("Capture Traffic")
        self.sniff(secs)

    def sniff(self, sniff_secs):
        command = "tshark -T fields -e _ws.col.No -e _ws.col.Time -e _ws.col.Source -e _ws.col.Destination -e _ws.col.Protocol -e _ws.col.Length -e _ws.col.Info"
        start = datetime.now()
        p = subprocess.Popen(command, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE, shell=True)
        with open(self.filename, 'a+') as f:
            for line in iter(p.stdout.readline, ','):
                f.write(str(line))
                if (datetime.now() - start).total_seconds() >= sniff_secs:
                    p.kill()
                    return
        return

    def create_dir(self):
        dirname = str(datetime.now().date())
        try:
            os.mkdir(dirname)
        except FileExistsError:
            pass
        return dirname


if __name__ == '__main__':
    schedule.every(5).minutes.do(Sniff, secs=60)
    while True:
        schedule.run_pending()
        time.sleep(1)
Answer 0 (score: 0)
You could try using Python's csv module to generate the comma-separated file.
Initialize a buffer and a csv writer:
def initialize_data_buffer():
    buffer = io.StringIO()  # csv.writer expects a text stream in Python 3
    wr = csv.writer(buffer, delimiter=',')
    csv_header = 'column_name_a,column_name_b,column_name_c,......'
    header = [i.strip() for i in csv_header.split(',')]
    wr.writerow(header)
Now you can write data into the buffer with the csv writer's writerow method:
wr.writerow([List of data to be written])
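For example, with one line of tshark -T fields output (which is tab-separated by default; the addresses and values below are made up for illustration):

# Hypothetical example line of "tshark -T fields" output, split on tabs
# to build the row list that the csv writer expects.
line = "1\t0.000000\t192.168.1.10\t192.168.1.1\tDNS\t74\tStandard query"
wr.writerow(line.split('\t'))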
When you are done, write the buffer out to a file:
data_to_write = buffer.getvalue()
with open(self.filename, "w") as logfile:
    logfile.write(data_to_write)
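If you would rather skip the in-memory buffer, csv.writer can also write straight to a file object. A minimal sketch (the "capture.csv" name is just a placeholder, and header is the list built above); the newline='' argument is what the csv docs recommend to avoid extra blank lines:

# Alternative sketch: write CSV rows directly to the file instead of a buffer.
with open("capture.csv", "w", newline='') as f:
    wr = csv.writer(f, delimiter=',')
    wr.writerow(header)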
Putting it all together:
import os
from datetime import datetime
import schedule
import subprocess
import time
import csv
import io


class Sniff:
    def __init__(self, secs):
        self.dir = self.create_dir()
        # Name the output after the current date/time; .csv so Excel recognises it.
        self.filename = self.dir + "/" + str(datetime.now()) + ".csv"
        self.buffer = io.StringIO()
        self.wr = csv.writer(self.buffer, delimiter=',')
        # Placeholder column names; the capture columns would be
        # No., Time, Source, Destination, Protocol, Length, Info.
        csv_header = 'column_name_a,column_name_b,column_name_c,......'
        header = [i.strip() for i in csv_header.split(',')]
        self.wr.writerow(header)
        print("Capture Traffic")
        self.sniff(secs)
        self.write_buffer_into_the_file()

    def sniff(self, sniff_secs):
        command = "tshark -T fields -e _ws.col.No -e _ws.col.Time -e _ws.col.Source -e _ws.col.Destination -e _ws.col.Protocol -e _ws.col.Length -e _ws.col.Info"
        start = datetime.now()
        p = subprocess.Popen(command, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE, shell=True)
        # p.stdout yields bytes; stop on an empty read and decode each line.
        for raw_line in iter(p.stdout.readline, b''):
            line = raw_line.decode(errors='replace').rstrip('\r\n')
            print(line)  # debug output of each captured line
            # tshark -T fields separates fields with tabs by default,
            # so split on tabs to build the row for the csv writer.
            list_of_data = [field.strip() for field in line.split('\t')]
            self.write_in_buffer(list_of_data)
            if (datetime.now() - start).total_seconds() >= sniff_secs:
                p.kill()
                return
        return

    def write_in_buffer(self, list_of_data):
        self.wr.writerow(list_of_data)

    def write_buffer_into_the_file(self):
        data_to_write = self.buffer.getvalue()
        with open(self.filename, "w") as logfile:
            logfile.write(data_to_write)

    def create_dir(self):
        dirname = str(datetime.now().date())
        try:
            os.mkdir(dirname)
        except FileExistsError:
            pass
        return dirname


if __name__ == '__main__':
    schedule.every(5).minutes.do(Sniff, secs=60)
    while True:
        schedule.run_pending()
        time.sleep(1)
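The question also asks for the .pcap file, which the buffer approach above does not produce. One option is to reuse the exact command from the question and let tshark stop itself with its -a duration autostop option instead of killing the process; a rough sketch (the capture_both name, the timestamp format and the shell redirection are just one way to do it, assuming the -w plus stdout combination behaves as it does when the command is run manually):

# Sketch: run the original command from the question for a fixed duration,
# keeping the raw packets in a .pcap and the field output in a .csv.
import subprocess
from datetime import datetime

def capture_both(duration_secs=60):
    # Example date/time naming only; adjust directory/extension scheme as needed.
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    pcap_file = stamp + ".pcap"
    csv_file = stamp + ".csv"
    fields = ("-e _ws.col.No -e _ws.col.Time -e _ws.col.Source "
              "-e _ws.col.Destination -e _ws.col.Protocol "
              "-e _ws.col.Length -e _ws.col.Info")
    # -a duration:N stops the capture on its own, -w keeps the raw packets,
    # and the -T fields output on stdout is redirected into the .csv file.
    command = (f"tshark -a duration:{duration_secs} -w {pcap_file} "
               f"-T fields {fields} > {csv_file}")
    subprocess.run(command, shell=True)

The function could then be scheduled the same way as the class above, e.g. schedule.every(5).minutes.do(capture_both, duration_secs=60).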