I save my website logs as JSON, and I want to load them into pandas. I have JSON records with this structure, containing several levels of nesting:
{"settings": {"siteIdentifier": "site1"},
 "event": {"name": "pageview",
           "properties": []},
 "context": {"date": "Thu Dec 01 2016 01:00:08 GMT+0100 (CET)",
             "location": {"hash": "",
                          "host": "aaa"},
             "screen": {"availHeight": 876,
                        "orientation": {"angle": 0,
                                        "type": "landscape-primary"}},
             "navigator": {"appCodeName": "Mozilla",
                           "vendorSub": ""},
             "visitor": {"id": "unique_id"}},
 "server": {"HTTP_COOKIE": "uid",
            "date": "2016-12-01T00:00:09+00:00"}}
{"settings": {"siteIdentifier": "site2"},
 "event": {"name": "pageview",
           "properties": []},
 "context": {"date": "Thu Dec 01 2016 01:00:10 GMT+0100 (CET)",
             "location": {"hash": "",
                          "host": "aaa"},
             "screen": {"availHeight": 852,
                        "orientation": {"angle": 90,
                                        "type": "landscape-primary"}},
             "navigator": {"appCodeName": "Mozilla",
                           "vendorSub": ""},
             "visitor": {"id": "unique_id"}},
 "server": {"HTTP_COOKIE": "uid",
            "date": "2016-12-01T00:00:09+00:10"}}
So far, the only working solution I have found is:
import pandas as pd
import json
from pandas.io.json import json_normalize

pd.set_option('expand_frame_repr', False)
pd.set_option('display.max_columns', 10)
pd.set_option("display.max_rows", 30)

first = True
filename = "/path/to/file.json"
with open(filename, 'r') as f:
    for line in f:  # read line by line to retrieve only one json
        data = json.loads(line)  # convert single json from string to dict
        if first:  # initialize the dataframe
            df = json_normalize(data)
            first = False
        else:  # add a row for each json
            df = df.append(json_normalize(data))  # normalize to flatten the data
df.to_csv("2016-12-02.csv", index=False, encoding='utf-8')
I have to read line by line because my JSON objects are concatenated one after another rather than stored in a list. My code works, but it is very slow. What can I do to speed it up? I am using pandas because it seemed like the right fit, but another approach would be fine too.
Answer 0 (score: 2)
You can first collect all the JSON objects into a single iterable and normalize them in one call, instead of appending to the DataFrame once per row:
with open(filename, 'r') as f:
    data = [json.loads(line) for line in f]  # parse every line into a dict

df = json_normalize(data)  # flatten all records in a single call
df.to_csv("2016-12-02.csv", index=False, encoding='utf-8')
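As a side note: on pandas 1.0 and later, `json_normalize` is available as a top-level function (`pd.json_normalize`), and the old `pandas.io.json` import is deprecated. A minimal self-contained sketch of the same batch approach, using an in-memory `StringIO` with two trimmed-down records in place of the real log file:

```python
import io
import json

import pandas as pd

# Two line-delimited JSON records standing in for the log file
raw = (
    '{"settings": {"siteIdentifier": "site1"}, "event": {"name": "pageview"}}\n'
    '{"settings": {"siteIdentifier": "site2"}, "event": {"name": "pageview"}}\n'
)

with io.StringIO(raw) as f:
    records = [json.loads(line) for line in f]  # parse once, into a list

# One json_normalize call flattens every record; nested keys become
# dotted column names such as "settings.siteIdentifier"
df = pd.json_normalize(records)
print(df["settings.siteIdentifier"].tolist())  # → ['site1', 'site2']
```

Building the list first and normalizing once avoids the quadratic cost of growing a DataFrame row by row with `append`, which is why this is so much faster than the loop in the question.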