I have a data frame of HTTP request logs. The only relevant column is the userAgent column, which I'm trying to parse with ua_parser. This turns each userAgent into a nested dictionary, like this:
>>> from ua_parser import user_agent_parser
>>> user_agent_parser.Parse('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36')
{'device': {'brand': None,
            'model': None,
            'family': 'Other'},
 'os': {'major': '10',
        'patch_minor': None,
        'minor': '10',
        'family': 'Mac OS X',
        'patch': '5'},
 'user_agent': {'major': '55',
                'minor': '0',
                'family': 'Chrome',
                'patch': '2883'},
 'string': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36'}
I'm trying to use the result of user_agent_parser to create 4 extra columns on my log data frame. I'd like the columns device_brand, device_model, os_family, and user_agent_family.
Unfortunately, when I store the result as a numpy array, I can't access the dictionary keys:
>>> parsed_ua = logs['userAgent'].apply(user_agent_parser.Parse)
>>> logs['device_brand'] = parsed_ua['device']['brand']
KeyError: 'device'
I tried converting it to a data frame so I could merge parsed_ua with logs. Unfortunately, this puts each whole dictionary into a single column:
>>> pd.DataFrame(parsed_ua)
userAgent
0 {u'device': {u'brand': None, u'model': None, u...
1 {u'device': {u'brand': None, u'model': None, u...
2 {u'device': {u'brand': None, u'model': None, u...
3 {u'device': {u'brand': None, u'model': None, u...
4 {u'device': {u'brand': None, u'model': None, u...
How do I parse the userAgent column and write the results to multiple columns?
Answer 0 (score: 2)
You can use the json_normalize() method:
In [146]: pd.io.json.json_normalize(parsed_ua)
Out[146]:
device.brand device.family device.model os.family os.major os.minor \
0 None Other None Mac OS X 10 10
os.patch os.patch_minor string \
0 5 None Mozilla/5.0 (Macintosh; Intel Mac OS...
user_agent.family user_agent.major user_agent.minor user_agent.patch
0 Chrome 55 0 2883
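In recent pandas versions, json_normalize is exposed at the top level as pd.json_normalize. A minimal, self-contained sketch of the approach above, which keeps only the four desired columns and joins them back onto logs. The two parsed dicts here are hand-written stand-ins shaped like ua_parser's output (trimmed to the needed fields), since the point is the flattening step, not the parsing:

```python
import pandas as pd

# Stand-ins for parsed_ua: dicts in the nested shape ua_parser returns.
parsed = [
    {'device': {'brand': None, 'model': None, 'family': 'Other'},
     'os': {'family': 'Mac OS X'},
     'user_agent': {'family': 'Chrome'}},
    {'device': {'brand': 'Apple', 'model': 'iPhone', 'family': 'iPhone'},
     'os': {'family': 'iOS'},
     'user_agent': {'family': 'Mobile Safari'}},
]

logs = pd.DataFrame({'userAgent': ['ua-1', 'ua-2']})

# json_normalize flattens the nested dicts into dotted column names
# such as 'device.brand' and 'user_agent.family'.
flat = pd.json_normalize(parsed)

# Select the four columns of interest and rename them as desired.
cols = {'device.brand': 'device_brand',
        'device.model': 'device_model',
        'os.family': 'os_family',
        'user_agent.family': 'user_agent_family'}
logs = logs.join(flat[list(cols)].rename(columns=cols))

print(logs['os_family'].tolist())  # ['Mac OS X', 'iOS']
```

The join works on the shared RangeIndex, so it assumes logs has a default integer index; reset_index first if it doesn't.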
Answer 1 (score: 0)
Building on what you've already done, you can use a lambda with the Series' apply:
ua = logs['userAgent'].apply(lambda ua: user_agent_parser.Parse(ua))
logs['device_brand'] = ua.apply(lambda x: x['device']['brand'])
logs['device_model'] = ua.apply(lambda x: x['device']['model'])
logs['os_family'] = ua.apply(lambda x: x['os']['family'])
logs['user_agent_family'] = ua.apply(lambda x: x['user_agent']['family'])
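If you'd rather avoid four separate apply passes, a single apply that returns a pd.Series builds all four columns at once. The fake_parse helper below is a hypothetical stand-in for user_agent_parser.Parse so the sketch runs on its own:

```python
import pandas as pd

def fake_parse(ua):
    # Hypothetical stand-in for user_agent_parser.Parse, returning
    # the same nested shape shown in the question.
    return {'device': {'brand': None, 'model': None, 'family': 'Other'},
            'os': {'family': 'Mac OS X'},
            'user_agent': {'family': 'Chrome'}}

logs = pd.DataFrame({'userAgent': ['Mozilla/5.0 (Macintosh; ...)']})
ua = logs['userAgent'].apply(fake_parse)

# When the function passed to Series.apply returns a Series, the
# result is a DataFrame: one pass yields all four columns.
extracted = ua.apply(lambda d: pd.Series({
    'device_brand': d['device']['brand'],
    'device_model': d['device']['model'],
    'os_family': d['os']['family'],
    'user_agent_family': d['user_agent']['family'],
}))
logs = logs.join(extracted)
print(logs['user_agent_family'].iloc[0])  # Chrome
```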