I have a pcap file captured with tshark containing data I want to analyze and export to a CSV or XLS file. From the tshark documentation I can see that I can either use the -z option with the right arguments, or use -T together with -E and -e. I am using Python 3.6 on a Debian machine. Currently, my command looks like this:
command = "tshark -q -o tcp.relative_sequence_numbers:false -o tcp.analyze_sequence_numbers:false " \
          "-o tcp.track_bytes_in_flight:false -Q -l -z diameter,avp,272,Session-Id,Origin-Host," \
          "Origin-Realm,Destination-Realm,Auth-Application-Id,Service-Context-Id,CC-Request-Type,CC-Request-Number," \
          "Subscription-Id,CC-Session-Failover,Destination-Host,User-Name,Origin-State-Id," \
          "Multiple-Services-Credit-Control,Requested-Service-Unit,Used-Service-Unit,SN-Total-Used-Service-Unit," \
          "SN-Remaining-Service-Unit,Service-Identifier,Rating-Group,User-Equipment-Info,Service-Information," \
          "Route-Record,Credit-Control-Failure-Handling -r {}".format(args.input_file)
Later I process the output with a pandas DataFrame, like this:
# loops adding TCP and/or UDP ports to scan traffic from
if args.tcp:
    for port in args.tcp:
        command += " -d tcp.port=={},diameter".format(port)
if args.udp:
    for port in args.udp:
        command += " -d udp.port=={},diameter".format(port)
# calling subprocess with output redirection to task variable
task = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
# a loop adding new data dictionaries to data_list
for line in task.stdout:
    line = re.sub(r"'", "", line.decode("utf-8"))  # firstly, decode byte string and get rid of '
    # secondly, split string on every whitespace or = and obtain a dictionary-like list of keys, values
    line = re.split(r"\s|=", line)
    # convert the obtained list to an ordered dictionary to preserve column order,
    # so that each item i is a dictionary key and item i+1 is its value
    row = OrderedDict(line[i:i + 2] for i in range(0, len(line) - 2, 2))
    data_list.append(row)
# remove last 4 dictionaries (last 4 lines of task.stdout)
data_list = data_list[:-4]
df = pd.DataFrame(data_list).fillna("-")  # create data frame from list of dicts and fill each NaN with "-"
df.to_excel("{}.xls".format(args.output_file), index=False)
print("Please remember that 'frame' column may not correspond to row index!")
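For illustration, here is the split-and-pair step above applied in isolation to a single hypothetical output line (the field values are made up, but the "key='value' key='value'" shape matches what the loop expects):

```python
import re
from collections import OrderedDict

# a made-up line in the shape produced by the -z diameter,avp statistic
line = "frame='1' Session-Id='session;123' CC-Request-Number='3'\n"
line = re.sub(r"'", "", line)     # strip the single quotes
parts = re.split(r"\s|=", line)   # split on whitespace or '='
# pair consecutive items as key, value; the trailing empty string
# left by the final '\n' is dropped by the len(parts) - 2 bound
row = OrderedDict(parts[i:i + 2] for i in range(0, len(parts) - 2, 2))
print(row)
```

Each parsed line becomes one OrderedDict, so appending them to a list and feeding the list to pd.DataFrame yields one row per packet with columns in capture order.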
When I open the output file I can see that everything works fine, except that in columns such as CC-Request-Number I get numeric values instead of their string representations. For example, where Wireshark shows TERMINATION-REQUEST for a packet, in the CC-Request-Number column of the output Excel file I see 3 in the row corresponding to that packet, instead of TERMINATION-REQUEST.

My question is: how can I convert this number to its string representation using the -z option, or (as far as I can guess from what I found online) how can I find the names of the fields above so that I can use their values with -T and -e? I listed all available fields with tshark -G, but there are so many of them that I cannot think of any sensible way to find the ones I want.
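(Editor's note: tshark -G fields prints one tab-separated record per field, so the list can be narrowed with a simple filter rather than read whole. A sketch over a few sample lines; the exact column layout shown here is an assumption, so check it against your tshark version's actual output:)

```python
# sample lines mimicking the tab-separated shape of `tshark -G fields`
# (abbreviated; real output has more columns)
sample = """F\tSession-Id\tdiameter.Session-Id\tFT_STRING\tdiameter
F\tCC-Request-Number\tdiameter.CC-Request-Number\tFT_UINT32\tdiameter
F\tSource Port\ttcp.srcport\tFT_UINT16\ttcp"""

# keep only Diameter fields whose record mentions "CC-Request",
# extracting the filter abbreviation (third column)
matches = [line.split("\t")[2] for line in sample.splitlines()
           if "diameter" in line and "CC-Request" in line]
print(matches)  # → ['diameter.CC-Request-Number']
```

In practice the sample string would be replaced by the captured stdout of tshark -G fields.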
Answer 0 (score: 0)
Oddly, with -T fields and -e tshark always prints the numeric representation, but with the "custom columns" output format it prints the textual representation. The good news is that the custom-columns mode is actually about 3x faster than the -T fields mode. The bad news is that, as far as I know, there is no way to control the separator character between custom columns, so if your field contents may contain spaces this approach may be unusable.

Instead of -z, try this:

-o column.format:'"time", "%t", "type", "%Cus:diameter.CC-Request-Number"'
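Applied to the question's setup, that preference can be spliced into the command string like any other -o option before running it through subprocess. A minimal sketch; the column list and input filename are illustrative:

```python
# build a tshark command using the custom-columns preference suggested above;
# the chosen columns and the input file name are placeholders
columns = '"time", "%t", "type", "%Cus:diameter.CC-Request-Number"'
command = "tshark -q -l -o column.format:'{}' -r {}".format(columns, "input.pcap")
print(command)
```

Because the option value contains double quotes and spaces, wrapping it in single quotes (as here) matters when the command is passed to a shell via shell=True.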
Answer 1 (score: 0)
Thanks to John Zwick's suggestion in this answer and the Python documentation for The ElementTree XML API, I implemented the code below (I downloaded dictionary.xml and chargecontrol.xml from the official Wireshark GitHub repository):
import re
import subprocess
import xml.etree.ElementTree as ET
from collections import OrderedDict

import pandas as pd

chargecontrol_tree = ET.parse("chargecontrol.xml")
dictionary_tree = ET.parse("dictionary.xml")
chargecontrol_root = chargecontrol_tree.getroot()
dictionary_root = dictionary_tree.getroot()
# list that will contain data dictionaries
data_list = []
# base command
command = "tshark -q -o tcp.relative_sequence_numbers:false -o tcp.analyze_sequence_numbers:false " \
          "-o tcp.track_bytes_in_flight:false -Q -l -z diameter,avp,272,Session-Id,Origin-Host," \
          "Origin-Realm,Destination-Realm,Auth-Application-Id,Service-Context-Id,CC-Request-Type,CC-Request-Number," \
          "Subscription-Id-Data,Subscription-Id-Type,CC-Session-Failover,Destination-Host,User-Name,Origin-State-Id," \
          "Requested-Service-Unit,Used-Service-Unit,SN-Total-Used-Service-Unit," \
          "SN-Remaining-Service-Unit,Service-Identifier,Rating-Group,User-Equipment-Info,Service-Information," \
          "Route-Record,Credit-Control-Failure-Handling -r {}".format(args.input_file)
# loops adding tcp and/or udp ports to scan traffic from
if args.tcp:
    for port in args.tcp:
        command += " -d tcp.port=={},diameter".format(port)
if args.udp:
    for port in args.udp:
        command += " -d udp.port=={},diameter".format(port)
# calling subprocess with output redirection to task variable
task = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
# a loop adding new data dictionaries to data_list
for line in task.stdout:
    line = re.sub(r"'", "", line.decode("utf-8"))  # firstly, decode byte string and get rid of '
    # secondly, split string on every whitespace or = and obtain a dictionary-like list of keys, values
    line = re.split(r"\s|=", line)
    # convert the obtained list to an ordered dictionary to preserve column order,
    # so that each item i is a dictionary key and item i+1 is its value
    row = OrderedDict(line[i:i + 2] for i in range(0, len(line) - 2, 2))
    data_list.append(row)
# remove last 4 dictionaries (last 4 lines of task.stdout)
data_list = data_list[:-4]
df = pd.DataFrame(data_list).fillna("-")  # create data frame from list of dicts and fill each NaN with "-"
# values taken from the official Wireshark repository:
# https://github.com/boundary/wireshark/blob/master/diameter/dictionary.xml
# https://github.com/wireshark/wireshark/blob/2832f4e97d77324b4e46aac40dae0ce898ae559d/diameter/chargecontrol.xml
df["Auth-Application-Id"] = df["Auth-Application-Id"].map(
    {node.attrib["code"]: node.attrib["name"] for node in
     dictionary_root.findall(".//*[@name='Auth-Application-Id']/enum")})
# columns whose numeric values have to be substituted with their enum names
for col in ["CC-Request-Type", "CC-Session-Failover", "Credit-Control-Failure-Handling", "Subscription-Id-Type"]:
    df[col] = df[col].map(
        {node.attrib["code"]: node.attrib["name"] for node in
         chargecontrol_root.findall(".//*[@name='{}']/enum".format(col))})
df.to_excel("{}.xls".format(args.output_file), index=False)
print("Please remember that 'frame' column may not correspond to row index!")
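The enum lookup driving the .map() calls above can be seen in isolation on a small inline fragment. The XML below only mimics the structure of Wireshark's chargecontrol.xml (its CC-Request-Type enum values follow RFC 4006); the real script parses the downloaded file instead:

```python
import xml.etree.ElementTree as ET

# a fragment shaped like diameter/chargecontrol.xml
xml_fragment = """<dictionary>
  <avp name="CC-Request-Type" code="416">
    <enum name="INITIAL_REQUEST" code="1"/>
    <enum name="UPDATE_REQUEST" code="2"/>
    <enum name="TERMINATION_REQUEST" code="3"/>
    <enum name="EVENT_REQUEST" code="4"/>
  </avp>
</dictionary>"""

root = ET.fromstring(xml_fragment)
# same XPath pattern as above: find any element whose name attribute
# matches the column, then take its enum children
mapping = {node.attrib["code"]: node.attrib["name"]
           for node in root.findall(".//*[@name='CC-Request-Type']/enum")}
print(mapping["3"])  # → TERMINATION_REQUEST
```

Passing such a dict to Series.map replaces every code with its name; codes missing from the dict become NaN, which the fillna("-") step then masks.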