I have a file that contains an array of JSON objects. The file is over 1 GB, so I cannot load it into memory all at once; I need to parse each object individually. I tried ijson, but it loads the entire array as a single object, which in practice is no different from a plain json.load().

Is there another way to do this?

EDIT: Never mind, just use ijson.items() with the prefix argument set to "item".
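For reference, a minimal sketch of that approach: ijson.items() with the prefix 'item' yields each element of a top-level JSON array one at a time. The file name data.json and the process() callback are placeholders for illustration only:

import ijson

# Stream the array one element at a time; only the current object is held in memory.
with open('data.json', 'rb') as f:
    for obj in ijson.items(f, 'item'):  # 'item' matches each element of the top-level array
        process(obj)  # placeholder for whatever you do with each object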
Answer 0 (score: 3)
You can parse the JSON file once to find the position of every level-1 separator, i.e. every comma that belongs directly to the top-level object, and then split the file into the parts those positions indicate. For example:
{"a": [1, 2, 3], "b": "Hello, World!", "c": {"d": 4, "e": 5}}
^ ^ ^ ^ ^
| | | | |
level-2 | quoted | level-2
| |
level-1 level-1
Here we want to find the level-1 commas, i.e. the ones that separate the objects contained in the top-level object. We can use a generator that parses the JSON stream, keeping track of descending into and stepping out of nested objects, and yields the position of every level-1 comma that is not inside a string:
def find_sep_pos(stream, *, sep=','):
    level = 0
    quoted = False     # are we currently inside a JSON string?
    backslash = False  # was the previous character an escaping backslash?
    for pos, char in enumerate(stream):
        if backslash:
            backslash = False
        elif char == '\\':
            backslash = True
        elif char == '"':
            quoted = not quoted
        elif quoted:
            pass  # brackets and commas inside strings are not structural
        elif char in '{[':
            level += 1
        elif char in ']}':
            level -= 1
        elif char == sep and level == 1:
            yield pos
For the example data above, this gives list(find_sep_pos(example)) == [15, 37].
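As a concrete check (with example holding the sample string from the diagram, a name assumed here purely for illustration):

example = '{"a": [1, 2, 3], "b": "Hello, World!", "c": {"d": 4, "e": 5}}'
assert list(find_sep_pos(example)) == [15, 37]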
We can then split the file into the parts indicated by the separator positions and load each part separately via json.loads:
import itertools as it
import json
with open('example.json') as fh:
    # Iterating over `fh` yields lines, so we chain them in order to get characters.
    sep_pos = tuple(find_sep_pos(it.chain.from_iterable(fh)))
    fh.seek(0)  # reset to the beginning of the file
    stream = it.chain.from_iterable(fh)
    opening_bracket = next(stream)
    closing_bracket = dict(('{}', '[]'))[opening_bracket]
    offset = 1  # the bracket we just consumed adds an offset of 1
    for pos in sep_pos:
        json_str = (
            opening_bracket
            + ''.join(it.islice(stream, pos - offset))
            + closing_bracket
        )
        obj = json.loads(json_str)  # this is your object
        next(stream)  # step over the separator
        offset = pos + 1  # adjust where we are in the stream right now
        print(obj)
    # The last object still remains in the stream, so we load it here.
    obj = json.loads(opening_bracket + ''.join(stream))
    print(obj)
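Note that this approach reads the file twice, once to locate the level-1 separators and once to slice out the chunks, but it only ever holds one top-level element (plus the tuple of separator positions) in memory at a time.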
Answer 1 (score: 0)