How do I handle a large number of JSON dictionaries?

Time: 2015-06-12 17:37:35

Tags: python json

I have a file that contains a stream of JSON dictionaries, like this:

{"menu": "a"}{"c": []}{"d": [3, 2]}{"e": "}"}

It also includes nested dictionaries, and it looks like I can't rely on newlines as separators. I need a parser that could be used like this:

for d in getobjects(f):
  handle_dict(d)

The point is that it would be perfect if the iteration happened only at the root level. Is there a Python parser that handles all of JSON's quirks? I'm interested in a solution that works on files that don't fit in RAM.

3 Answers:

Answer 0 (score: 5):

I think JSONDecoder.raw_decode may be what you are looking for. You may have to do some string massaging to get it into shape, depending on newlines and so on, but with a little work you can probably get something running. See this example:

import json
jstring = '{"menu": "a"}{"c": []}{"d": [3, 2]}{"e": "}"}'
substr = jstring
decoder = json.JSONDecoder()

while len(substr) > 0:
    data,index = decoder.raw_decode(substr)
    print(data)
    substr = substr[index:]

提供输出:

{'menu': 'a'}
{'c': []}
{'d': [3, 2]}
{'e': '}'}
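The "string massaging" mentioned above mostly amounts to skipping whitespace between objects, which makes raw_decode raise ValueError. As a sketch (the helper name iter_json is mine, not from the answer), raw_decode also accepts a start index, so the remainder never has to be re-sliced:

```python
import json

def iter_json(s):
    """Yield each top-level JSON value from a concatenated string,
    skipping any whitespace between values."""
    decoder = json.JSONDecoder()
    idx, n = 0, len(s)
    while idx < n:
        # Skip whitespace: raw_decode raises ValueError on a leading space.
        while idx < n and s[idx].isspace():
            idx += 1
        if idx == n:
            break
        # raw_decode takes an optional start index, so no slicing is needed.
        obj, idx = decoder.raw_decode(s, idx)
        yield obj

print(list(iter_json('{"menu": "a"}\n{"c": []}')))
# → [{'menu': 'a'}, {'c': []}]
```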

Answer 1 (score: 2):

Here it is: a tested solution, based on @Brien's answer.

This should be able to handle input files of arbitrary size. It is a generator, so it yields one dictionary object at a time as it parses them out of the JSON input file.

If you run it standalone, it runs three test cases (in the if __name__ == "__main__" block).

To read from standard input, of course, just pass sys.stdin as the input file argument.

import json


_DECODER = json.JSONDecoder()

_DEFAULT_CHUNK_SIZE = 4096
_MB = (1024 * 1024)
_LARGEST_JSON_OBJECT_ACCEPTED = 16 * _MB  # default to 16 megabytes

def json_objects_from_file(input_file,
            chunk_size=_DEFAULT_CHUNK_SIZE,
            max_size=_LARGEST_JSON_OBJECT_ACCEPTED):
    """
    Read an input file, and yield up each JSON object parsed from the file.

    Allocates minimal memory so should be suitable for large input files.
    """
    buf = ''
    while True:
        temp = input_file.read(chunk_size)
        if not temp:
            break

        # Accumulate more input to the buffer.
        #
        # The decoder is confused by leading white space before an object.
        # So, strip any leading white space if any.
        buf = (buf + temp).lstrip()
        while True:
            try:
                # Try to decode a JSON object.
                x, i = _DECODER.raw_decode(buf)
                # Chop the parsed JSON out of the buffer, along with any
                # white space that follows it.  (Chop unconditionally:
                # leaving the buffer untouched for a non-dict value at the
                # top level would loop forever.)
                buf = buf[i:].lstrip()
                # If we got back a dict, we got a whole JSON object.  Yield it.
                if isinstance(x, dict):
                    yield x
            except ValueError:
                # Either the input is garbage or we got a partial JSON object.
                # If it's a partial, maybe appending more input will finish it,
                # so catch the error and keep handling input lines.

                # Note that if you feed in a huge file full of garbage, this will grow
                # very large.  Blow up before reading an excessive amount of data.

                if len(buf) >= max_size:
                    raise ValueError("either bad input or too-large JSON object.")
                break
    buf = buf.strip()
    if buf:
        if len(buf) > 70:
            buf = buf[:70] + '...'
        raise ValueError('leftover stuff from input: "{}"'.format(buf))

if __name__ == "__main__":
    from io import StringIO

    jstring = '{"menu":\n"a"}{"c": []\n}\n{\n"d": [3,\n 2]}{\n"e":\n "}"}'
    f = StringIO(jstring)
    correct = [{'menu': 'a'}, {'c': []}, {'d': [3, 2]}, {'e': '}'}]

    result = list(json_objects_from_file(f, chunk_size=3))
    assert result == correct

    f = StringIO(' ' * (17 * _MB))
    correct = []

    result = list(json_objects_from_file(f, chunk_size=_MB))
    assert result == correct

    f = StringIO('x' * (17 * _MB))
    correct = "ok"

    try:
        result = list(json_objects_from_file(f, chunk_size=_MB))
    except ValueError:
        result = correct
    assert result == correct

Answer 2 (score: 0):

This is a partial solution, but it keeps slowing down as the input progresses, since io.getvalue() copies the whole accumulated buffer on every iteration:

#!/usr/bin/env pypy

import json
from io import StringIO
import sys

def main():
    BUFSIZE = 10240
    f = sys.stdin
    decoder = json.JSONDecoder()
    io = StringIO()

    do_continue = True
    while True:
        read = f.read(BUFSIZE)
        if len(read) < BUFSIZE:
            do_continue = False
        io.write(read)
        try:
            data, offset = decoder.raw_decode(io.getvalue())
            print(data)
            rest = io.getvalue()[offset:]
            if rest.startswith('\n'):
                rest = rest[1:]
            io = StringIO()
            io.write(rest)
        except ValueError as e:
            #print(e)
            #print(repr(io.getvalue()))
            continue
        if not do_continue:
            break

if __name__ == '__main__':
    main()
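A sketch of how the slowdown could be avoided (the function name stream_objects and the compaction threshold are mine): instead of rebuilding the whole buffer after every object, decode at a tracked offset and compact the buffer only once the consumed prefix grows large, so the per-object cost stays roughly constant:

```python
import json
from io import StringIO

def stream_objects(f, bufsize=10240):
    """Yield top-level JSON values from file object f.

    Avoids copying the full buffer on every decode by tracking a read
    offset and compacting only occasionally."""
    decoder = json.JSONDecoder()
    buf = ''
    pos = 0
    while True:
        chunk = f.read(bufsize)
        if chunk:
            buf += chunk
        while True:
            # Skip separators (newlines, spaces) between objects.
            while pos < len(buf) and buf[pos].isspace():
                pos += 1
            try:
                obj, pos = decoder.raw_decode(buf, pos)
            except ValueError:
                break  # partial object: need more input
            yield obj
            # Compact once the consumed prefix is larger than one chunk.
            if pos > bufsize:
                buf = buf[pos:]
                pos = 0
        if not chunk:
            break

print(list(stream_objects(StringIO('{"menu": "a"}{"c": []}\n{"d": [3, 2]}'))))
# → [{'menu': 'a'}, {'c': []}, {'d': [3, 2]}]
```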