I don't think this is standard JSON format. I don't see the colons I've seen in other examples. For instance, the first row is for Florida (FL), so I was expecting to see something like 'State': 'FL'. No headers appear here, but headers do show up when I view the results on the web page. Are colons required for it to parse as valid JSON? Ultimately I want to convert this to CSV so I can load it into Excel. Below is a sample of the file (a short illustration of the colon point follows the sample).
{
'aaData':[
[
[
'99.04,99.08,99.08,99.12,99.08,99.11,99.12,99.13,99.11,99.11,99.12,99.13,99.11,99.10,99.09,99.06,99.09,99.11,99.09,99.13,99.11,99.07,98.96,98.38,98.66,99.11,99.10,98.70',
'2961916',
'4'
],
'FL',
'Atmore',
'WALNUT HILL',
'JAKES ROAD',
'WLHLFL',
'EquipmentType',
'.',
'1-1-2-1',
'.',
'2015-09-10',
'2015-10-07',
None,
'6.14',
'99.13',
'908',
'345',
'448',
'971',
'24.00',
'2672',
'0',
'0',
'0',
'Critical',
'2672',
'2015-10-09 12:57:50'
],
[
[
'98.31,98.06,97.55,96.10,97.62,98.20,97.18,97.26,97.74,96.94,97.61,98.03,98.66,97.69,98.17,97.61,98.23,96.98,97.97,97.84,97.62,98.16,97.05,98.05,98.11,97.40,96.72,95.87',
'3133016',
'4'
],
'FL',
'Atmore',
'MOLINO',
'QUINTETTE',
'MOLNFL',
'EquipmentType',
'.',
'1-1-2-1',
'.',
'2015-09-10',
'2015-10-07',
None,
'3.07',
'98.66',
'1017',
'338',
'416',
'916',
'31.39',
'2687',
'0',
'0',
'0',
'Critical',
'2687',
'2015-10-09 12:57:50'
]
]
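A quick note on the colon question, illustrated with a tiny made-up snippet rather than the real feed: in JSON, colons only separate keys from values inside objects ({...}); the elements of an array ([...]) are simply comma-separated, so nothing inside aaData needs a colon beyond the one after the 'aaData' key itself.

import json

# Hypothetical miniature version of the feed: one object key whose value
# is an array of arrays -- no colons are needed inside the arrays.
sample = '{"aaData": [["FL", "Atmore", "WALNUT HILL"], ["FL", "Atmore", "MOLINO"]]}'
mini_obj = json.loads(sample)
print(mini_obj['aaData'][0])   # ['FL', 'Atmore', 'WALNUT HILL']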
Current code
from urllib.request import urlopen
import json
url_fl = 'http://corporate.server.private/server/scripts/other/get_json_bw_report.php?tType=Port&sList=&bList=%274%27,%273%27,%272%27,%271%27&stList=%27FL%27'
str_response = urlopen(url_fl).read().decode('utf-8')
obj = json.loads(str_response)
print(obj)
Update
Adding this code pulls out the data I want to extract:
list1 = obj['aaData'][0][1:]
print(list1)
list2 = obj['aaData'][1][1:]
print(list2)
list3 = obj['aaData'][2][1:]
print(list3)
Result:
['FL', 'Atmore', 'WALNUT HILL', 'JAKES ROAD', 'WLHLFL', 'EquipmentType', '.', '1-1-2-1', '.', '2015-09-11', '2015-10-08', None, '6.14', '99.13', '916', '357', '430', '969', '24.00', '2672', '0', '0', '0', 'Critical', '2672', '2015-10-10 09:02:28']
['FL', 'Atmore', 'MOLINO', 'QUINTETTE', 'MOLNFL', 'EquipmentType', '.', '1-1-2-1', '.', '2015-09-11', '2015-10-08', None, '3.07', '98.66', '1027', '341', '412', '907', '31.39', '2687', '0', '0', '0', 'Critical', '2687', '2015-10-10 09:02:28']
['FL', 'Atmore', 'WALNUT HILL', 'BAY SPRINGS', 'WLHLFL', 'EquipmentType', '.', '1-1-2-1', '.', '2015-09-11', '2015-10-08', None, '6.14', '99.13', '1062', '428', '438', '760', '31.53', '2688', '0', '0', '0', 'Critical', '2688', '2015-10-10 09:02:28']
But this requires walking through the file and finding each instance by hand. The pattern is ['aaData'][0][1:], ['aaData'][1][1:], ['aaData'][2][1:], and the file can contain many of these. How do I iterate or loop through the file and print each one?
Edit - final working code
from urllib.request import urlopen
import json
import csv
url_fl = 'http://company.server.org'
url_response = urlopen(url_fl).read().decode('utf-8')
obj = json.loads(url_response)
obj_parse = obj['aaData']
with open('test.csv', 'w', newline='') as fp:
    data = csv.writer(fp, delimiter=',')
    for row in obj_parse:
        # drop the nested list at index 0 and write the remaining fields as one CSV row
        data.writerows([row[1:]])
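Since the goal is to open the CSV in Excel and the column titles only appear on the web page, a header row can be written before the data rows. This is just a sketch: the names below are hypothetical placeholders, not the feed's actual column titles.

# Hypothetical header names -- replace them with the titles shown on the web page.
header = ['State', 'City', 'Site', 'Route', 'Code']  # ...one entry per remaining column

with open('test.csv', 'w', newline='') as fp:
    data = csv.writer(fp, delimiter=',')
    data.writerow(header)             # header row so Excel shows column titles
    for row in obj_parse:
        data.writerows([row[1:]])     # same extraction as above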
Answer 0 (score: 0)
As you have realized by now, the JSON is valid and you are parsing it correctly in Python. You now have a data structure to work with.
In Python, you can process the list with a loop:

for row in obj['aaData']:
    print(row[1:])
You then need the csv module to correctly encode the data structure you build (i.e., the list of rows you want in the file) as CSV.
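As a compact sketch of that idea (assuming, as above, that obj is the parsed response and the nested list at index 0 of each record should be dropped):

import csv

with open('test.csv', 'w', newline='') as fp:
    writer = csv.writer(fp)
    # one CSV row per record in aaData, skipping the nested first element
    writer.writerows(row[1:] for row in obj['aaData'])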
Answer 1 (score: 0)
from urllib.request import urlopen
import json
import csv
url_fl = 'http://company.server.org'
url_response = urlopen(url_fl).read().decode('utf-8')
obj = json.loads(url_response)
obj_parse = obj['aaData']
with open('test.csv', 'w', newline='') as fp:
    data = csv.writer(fp, delimiter=',')
    for row in obj_parse:
        data.writerows([row[1:]])