Combine the JSON objects from many files into a single file, line by line

Date: 2019-03-23 03:43:46

Tags: python json

I have a directory full of JSON files, like this:

json/
    checkpoint_01.json
    checkpoint_02.json
    ...
    checkpoint_100.json

Each file contains thousands of JSON objects, dumped one per line.

{"playlist_id": "37i9dQZF1DZ06evO2dqn7O", "user_id": "spotify", "sentence": ["Lil Wayne", "Wiz Khalifa", "Imagine Dragons", "Logic", "Ty Dolla $ign", "X Ambassadors", "Machine Gun Kelly", "X Ambassadors", "Bebe Rexha", "X Ambassadors", "Jamie N Commons", "X Ambassadors", "Eminem", "X Ambassadors", "Jamie N Commons", "Skylar Grey", "X Ambassadors", "Zedd", "Logic", "X Ambassadors", "Imagine Dragons", "X Ambassadors", "Jamie N Commons", "A$AP Ferg", "X Ambassadors", "Tom Morello", "X Ambassadors", "The Knocks", "X Ambassadors"]}
{"playlist_id": "37i9dQZF1DZ06evO1A0kr6", "user_id": "spotify", "sentence": ["RY X", "ODESZA", "RY X", "Thomas Jack", "RY X", "Rhye", "RY X"]} 
(...)

I know I can merge all the files into one like this:

import glob

def combine():
    read_files = glob.glob("*.json")
    # text mode ("w"/default "r"), since we join the contents as strings
    with open("merged_playlists.json", "w") as outfile:
        outfile.write('[{}]'.format(
            ','.join([open(f).read() for f in read_files])))
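However, this approach reads every file into memory at once and produces one big JSON array rather than one object per line. Since each input file already holds one JSON object per line, a streaming merge sidesteps both problems. This is a sketch, not code from the question; the `json/*.json` pattern and the `merged.json` output name are illustrative:

```python
import glob
import json

def combine_streaming(pattern="json/*.json", out_path="merged.json"):
    """Stream every line of every matching file into one output file,
    validating each line as JSON and keeping one object per line."""
    with open(out_path, "w", encoding="utf-8") as outfile:
        for path in sorted(glob.glob(pattern)):
            with open(path, encoding="utf-8") as infile:
                for line in infile:
                    line = line.strip()
                    if not line:
                        continue  # skip blank lines
                    json.loads(line)  # raises if the line is not valid JSON
                    outfile.write(line + "\n")
```

Only one line is held in memory at a time, so the 4 GB total size is not an issue.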

But in the end, I need to parse one big JSON file with the following script:


parser.py

"""
Passes extraction output into `word2vec`
and prints results as JSON.
"""    
from __future__ import absolute_import, unicode_literals
import json
import click    
from numpy import array as np_array    
import gensim

class LineGenerator(object):
    """Reads a sentence file, yields numpy array-wrapped sentences
    """

    def __init__(self, fh):
        self.fh = fh

    def __iter__(self):
        # iterate over the file handle lazily instead of calling
        # readlines(), which would load the entire file into memory
        for line in self.fh:
            yield np_array(json.loads(line)['sentence'])


def serialize_rankings(rankings):
    """Returns a JSON-encoded object representing word2vec's
    similarity output.
    """  

    return json.dumps([
        {'artist': artist, 'rel': rel}
        for (artist, rel)
        in rankings
    ])

@click.command()
@click.option('-i', 'input_file', type=click.File('r', encoding='utf-8'),
              required=True)
@click.option('-t', 'term', required=True)
@click.option('--min-count', type=click.INT, default=5)
@click.option('-w', 'workers', type=click.INT, default=4)
def cli(input_file, term, min_count, workers):
    # create word2vec
    model = gensim.models.Word2Vec(min_count=min_count, workers=workers)
    model.build_vocab(LineGenerator(input_file))

    try:
        similar = model.most_similar(term)
        click.echo( serialize_rankings(similar) )
    except KeyError as exc:
        # really wish this was a more descriptive error
        exit('Could not parse input: {}'.format(exc))

if __name__ == '__main__':
    cli()

Question:

So, how can I combine all the JSON objects from the `json/` folder into a single file, ending up with one JSON object per line?

Note: memory is a concern here, since the files add up to 4 GB in total.

1 Answer:

Answer 0 (score: 0)

If memory is a concern, you will most likely want to use generators to load each line on demand. The following solution assumes Python 3:

import json

# get a list of file paths, you can do this via os.listdir or glob.glob... however you want.
my_filenames = [...]

def stream_lines(filenames):
    for name in filenames:
        with open(name) as f:
            yield from f

lines = stream_lines(my_filenames)

def stream_json_objects_while_ignoring_errors(lines):
    for line in lines:
        try:
            yield json.loads(line)
        except ValueError:  # json.loads raises a ValueError subclass
            print("ignoring invalid JSON")

json_objects = stream_json_objects_while_ignoring_errors(lines)

for obj in json_objects:
    # now you can loop over the json objects without reading all the files into memory at once
    # example:
    print(obj["sentence"])

Note that for simplicity I have left out some details, such as error handling and dealing with empty lines or files that fail to open.
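To actually end up with one file holding one JSON object per line, the streamed objects can be re-serialized as they are consumed. This sketch builds on the generator pipeline above; the `write_jsonl` name and the output path are illustrative, not part of the original answer:

```python
import json

def write_jsonl(json_objects, out_path):
    """Write each object back out as a single line of JSON."""
    with open(out_path, "w", encoding="utf-8") as outfile:
        for obj in json_objects:
            outfile.write(json.dumps(obj) + "\n")
```

For example, `write_jsonl(json_objects, "merged_playlists.json")` would consume the generator one object at a time, so memory use stays flat regardless of the total input size.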