Merging CSV files with similar names in Python

Date: 2015-11-20 02:02:31

标签: python regex csv pandas data-cleansing

Summary

Given a directory containing CSV files named with the pattern Prefix-Year.csv, create a new set of CSV files named Prefix-aggregate.csv, where each aggregate file is the combination of all the CSV files that share the same prefix.

Explanation

I have a directory containing 5,500 CSV files named with this pattern: Prefix-Year.csv. For example:

18394-1999.csv
   . . .       //consecutive years
18394-2014.csv
18395-1999.csv //next location

I want to group the files that share a common prefix and combine each group into a single file named Prefix-aggregate.csv.
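To make the grouping concrete, here is a minimal sketch of the prefix extraction I have in mind (data_dir stands in for my actual directory):

import os
from collections import defaultdict

groups = defaultdict(list)
for name in os.listdir('data_dir'):
    if name.endswith('.csv') and '-' in name:
        prefix = name.split('-', 1)[0]   # e.g. '18394' from '18394-1999.csv'
        groups[prefix].append(name)

# groups now maps each prefix to the list of CSV files that should be combined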

2 Answers:

Answer 0 (score: 0)

How about this:

import os
import pandas as pd

# take only the top-level directory listing (no recursion into subdirectories)
root, dirs, files = next(os.walk('data_dir'))

# append every matching file's rows to a single aggregate CSV
with open('18394_aggregate.csv', 'a') as outfile:
    for infile in files:
        if infile.startswith('18394') and infile.endswith('.csv'):
            # header=None assumes the source files have no header row (use header=0 if they do)
            df = pd.read_csv(os.path.join(root, infile), header=None)
            df.to_csv(outfile, index=False, header=False)
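This hardcodes the 18394 prefix; roughly the same idea extended to every prefix in the directory could look like the sketch below (untested, assuming the prefix is everything before the first hyphen and the files have no header row):

import os
import pandas as pd

root, dirs, files = next(os.walk('data_dir'))

# collect every distinct prefix (the part of the filename before the first hyphen)
prefixes = sorted({f.split('-', 1)[0] for f in files if f.endswith('.csv') and '-' in f})

for prefix in prefixes:
    with open('{}-aggregate.csv'.format(prefix), 'a') as outfile:
        for infile in sorted(files):
            if infile.startswith(prefix + '-') and infile.endswith('.csv'):
                df = pd.read_csv(os.path.join(root, infile), header=None)
                df.to_csv(outfile, index=False, header=False)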

Answer 1 (score: 0)

The solution to your question is the find_filesets() function below. I've also included a CSV merging function based on MaxNoe's answer.

#!/usr/bin/env python

import glob
import random
import os
import pandas

def rm_minus_rf(dirname):
    # walk bottom-up so files are removed before their (now empty) directories
    for r, d, f in os.walk(dirname, topdown=False):
        for name in f:
            os.remove(os.path.join(r, name))
        os.rmdir(r)

def create_testfiles(path):
    rm_minus_rf(path)
    os.mkdir(path)

    random.seed()
    for i in range(10):
        n = random.randint(10000,99999)
        for j in range(random.randint(0,20)):
            # year may repeat, doesn't matter
            year = 2015 - random.randint(0,20)
            with open("{}/{}-{}.csv".format(path, n, year), "w"):
                pass

def find_filesets(path="."):
    csv_files = {}
    for name in glob.glob("{}/*-*.csv".format(path)):
        # there's almost certainly a better way to do this
        key = os.path.splitext(os.path.basename(name))[0].split('-')[0]
        csv_files.setdefault(key, []).append(name)

    for key,filelist in csv_files.items(): 
        print(key, filelist)
        # do something with filelist
        create_merged_csv(key, filelist)

def create_merged_csv(key, filelist):
    # newline='' avoids extra blank lines on Windows when pandas writes to the handle
    with open('{}-aggregate.csv'.format(key), 'w', newline='') as outfile:
        for filename in filelist:
            # header=None assumes the source files have no header row (use header=0 if they do)
            df = pandas.read_csv(filename, header=None)
            df.to_csv(outfile, index=False, header=False)

TEST_DIR_NAME="testfiles"
create_testfiles(TEST_DIR_NAME)
find_filesets(TEST_DIR_NAME)
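The last two lines only exercise the code against randomly generated test files; for the real data set, skip create_testfiles() and point find_filesets() at the actual directory (the path below is a placeholder):

# run against the real data instead of the generated test files
find_filesets("path/to/real/csv_directory")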