Merging multiple files by reading them line by line?

Date: 2016-04-28 00:35:52

Tags: python

I have 3 files:

File 1:

    chrM    6423    5
    chrM    6432    4
    chrM    7575    1
    chrM    7670    1
    chrM    7933    1
    chrM    7984    1
    chrM    8123    1
    chrM    9944    1
    chrM    10434   1
    chrM    10998   13
    chrM    10999   19
    chrM    11024   17
    chrM    11025   29
    chrM    11117   21
    chrM    11118   42
    chr1    197095350   2
    chr1    197103061   1
    chr1    197103582   1
    chr1    197103615   1
    chr1    197103810   3
    chr1    197103885   2
    chr1    197104256   1
    chr1    197107467   4
    chr1    197107480   5
    chr1    197107498   6
    chr1    197107528   10
    chr1    197107805   1
    chr1    197107806   1
    chr1    197107813   1
    chr1    197107814   1
    chr1    197107839   1
    chr1    197107840   1
    chr1    197107855   1
    chr1    197107856   1
    chr1    197107877   1
    chr1    197107878   1
    chr1    197111511   1
    chr1    197120122   1
    chr1    197125503   1
    chr1    197126978   1
    chr1    197127070   1
    chr1    197127084   1
    chr1    197129731   2
    chr1    197129758   2
    chr1    197129765   1
    chr1    197167632   2
    chr1    197167652   2
    chr1    197167668   2
    chr1    197167682   2
    chr1    197181417   1
    chr1    197181973   3
    chr1    197181975   3
    chr1    197192150   0

File 2:

    chrM    6423    5
    chrM    6432    4
    chrM    6582    1
    chrM    6640    1
    chrM    6643    1
    chrM    7140    1
    chrM    10998   7
    chrM    10999   8
    chrM    11024   10
    chrM    11025   13
    chrM    11117   12
    chrM    11118   33
    chr1    197095157   2
    chr1    197095185   2
    chr1    197098860   1
    chr1    197105061   1
    chr1    197107422   1
    chr1    197107436   1
    chr1    197107467   3
    chr1    197107480   4
    chr1    197107498   3
    chr1    197107528   4
    chr1    197107805   2
    chr1    197107813   2
    chr1    197107839   1
    chr1    197108557   1
    chr1    197108591   1
    chr1    197108596   1
    chr1    197108617   1
    chr1    197108651   1
    chr1    197139308   1
    chr1    197139335   1
    chr1    197143403   1
    chr1    197143442   1
    chr1    197145546   1
    chr1    197148715   1
    chr1    197148723   1
    chr1    197148731   1
    chr1    197148761   1
    chr1    197153190   1
    chr1    197166831   1
    chr1    197166847   2
    chr1    197166922   2
    chr1    197166950   1
    chr1    197166954   1
    chr1    197167041   1
    chr1    197167778   1
    chr1    197167791   1
    chr1    197167834   1
    chr1    197167857   2
    chr1    197167860   2
    chr1    197167865   1
    chr1    197167867   1
    chr1    197167871   1
    chr1    197167935   2
    chr1    197167946   2
    chr1    197167948   2
    chr1    197167951   2
    chr1    197167974   1
    chr1    197167980   1
    chr1    197168142   1
    chr1    197168163   1
    chr1    197168195   1
    chr1    197168210   1
    chr1    197169548   1
    chr1    197169580   1
    chr1    197169609   1
    chr1    197183318   1
    chr1    197183404   1
    chr1    197184910   1
    chr1    197184937   1
    chr1    197186368   1
    chr1    197191991   1
    chr1    197192031   1
    chr1    197192047   1
    chr1    197192097   1
    chr1    197192106   1
    chr1    197192125   1
    chr1    197192150   1

File 3:

    chrM    6423    2
    chrM    6432    1
    chrM    6766    1
    chrM    6785    1
    chrM    10075   1
    chrM    10084   1
    chrM    10998   7
    chrM    10999   8
    chrM    11024   7
    chrM    11025   14
    chrM    11117   8
    chr1    197095943   1
    chr1    197096144   1
    chr1    197104061   1
    chr1    197104257   1
    chr1    197107805   2
    chr1    197122470   1
    chr1    197123085   1
    chr1    197123093   1
    chr1    197126978   1
    chr1    197142562   1
    chr1    197157076   1
    chr1    197157101   2
    chr1    197162035   4
    chr1    197167431   1
    chr1    197167470   1
    chr1    197167535   1
    chr1    197167652   1
    chr1    197167668   1
    chr1    197167682   1
    chr1    197167715   1
    chr1    197167734   1
    chr1    197167755   1
    chr1    197168107   2
    chr1    197168113   2
    chr1    197172198   1
    chr1    197172211   1
    chr1    197172221   1
    chr1    197172271   1
    chr1    197175787   1
    chr1    197175806   1
    chr1    197175822   1
    chr1    197192150   0

The resulting file should look like this:

    6423    chrM    2   5   5
    6432    chrM    1   4   4
    6582    chrM    1
    197093370   chr1    1
    197093385   chr1    1
    197094791   chr1    1
    197094813   chr1    1
    197094855   chr1    1
    197094857   chr1    1
    197095157   chr1    2
    197095185   chr1    2
    197095350   chr1    2
    197095943   chr1    1
    197096

Right now my code mostly works, but there is an issue in the while loop: almost at the end of merging the files, after many records have been merged, it stops writing to the output file. It only writes 197096.... and then stops with this error:

    Traceback (most recent call last):
      File "", line 4, in
    IndexError: list index out of range

I think this error is related to the while loop, but I don't know why it happens. I have also kept changing my code, which you can see below.
Here is the problem: you can clearly see in the result file that something goes wrong at this point. After reading from a single file, the code fails to pick up the values that are common to all files, and in this case it does not output 7575, which should come right after 7140.

I have multiple files that are quite large, and I want to read them line by line and merge them, combining records that have the same value in column 2. The logic I used is to collect the column-2 values from all files in a list and then find their minimum. The records with that minimum value are written to the new file (column 3 is kept in mycover, and its values are appended to the minimum-value line). The files from which those records were read are tracked in new_myfile[], so that the next line can be read from them, and the records that have already been written are removed from the lists.

I hope this is understandable. I don't know how to repeat this process until every file reaches its end, so that all records from all files are read. My code is below:

    import sys
    import glob
    import errno
    path = '*Sorted_Coverage.txt'
    filenames = glob.glob(path)
    files = [open(i, "r") for i in filenames]

    p=1
    mylist=[]
    mychr=[]
    mycover=[]
    new_mychr=[]
    new_mycover=[]
    new_mylist=[]
    myfile=[]
    new_myfile=[]
    ab=""
    g=1
    result_f = open('MERGING_water_onlyselected.txt', 'a')
    for j in files:
        line = j.readline()
        parts = line.split()
        mychr.append(parts[0])
        mycover.append(parts[2])
        mylist.append(parts[1])
        myfile.append(j)
    mylist=map(int,mylist)
    minval = min(mylist)
    ind = [i for i, v in enumerate(mylist) if v == minval]
    not_ind = [i for i, v in enumerate(mylist) if v != minval]
    w=""
    j=0
    for j in xrange(len(ind)):  # writing records with the minimum value to the file
        if(j==0):
            ab = (str(mylist[ind[j]])+'\t'+mychr[ind[j]]+'\t'+mycover[ind[j]])
        else:
            ab=ab+'\t'+mycover[ind[j]]

    # smallest value written to file
    result_f.writelines(ab+'\n')
    ab=""

    for i in ind:
        new_myfile.append(myfile[i])

    # removing the records (by index) which have been used from the lists
    for i in sorted(ind, reverse=True):
        del mylist[i]
        del mycover[i]
        del mychr[i]
        del myfile[i]

    # how to iterate the following code over all records of all files till the end of each file?
    while(True):
        for i in xrange(len(new_myfile)):
            print len(new_myfile)
            myfile.append(new_myfile[i])
            line = new_myfile[i].readline()
            parts = line.split()
            mychr.append(parts[0])
            mycover.append(parts[2])
            mylist.append(parts[1])
            new_myfile=[]
        mylist=map(int, mylist)
        minval = min(mylist)
        print minval
        print("list values:")
        print mylist
        ind = [i for i, v in enumerate(mylist) if v == minval]
        not_ind = [i for i, v in enumerate(mylist) if v != minval]
        k=0
        ab=""
        for j in xrange(len(ind)):  # writing records with the minimum value to the file
            if(j==0):
                ab = (str(mylist[ind[j]])+'\t'+str(mychr[ind[j]])+'\t'+str(mycover[ind[j]]))
                k=k+1
            else:
                ab=ab+'\t'+str(mycover[ind[j]])
                k=k+1
        # smallest value written to file
        result_f.writelines(ab+'\n')
        ab=""
        for i in ind:
            new_myfile.append(myfile[i])
        # removing the records (by index) which have been used from the lists
        for i in sorted(ind, reverse=True):
            del mylist[i]
            del mycover[i]
            del mychr[i]
            del myfile[i]
    result_f.close()

I have been searching for a solution for many days now, but still can't find one. I also don't know whether this code can be improved further, as I'm still new to Python.

I would be very grateful if anyone could help me.

1 Answer:

Answer 0 (score: 1)

Basic solution

This is a very simple approach. I don't know how it performs on large files (see the comments below).

I assume that all files are already sorted with respect to the second column. I also assume that the first-column label ('chrM', 'chr1') stays the same for a fixed value in the second column (I will refer to this column as the 'id' below).
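
If you want to double-check the sortedness assumption before merging, a small helper along these lines could be used (just a sketch; the function name is made up):

    def is_sorted_by_id(filename):
        """Return True if the second column of `filename` never decreases."""
        last_id = float('-inf')
        with open(filename) as f:
            for line in f:
                parts = line.split()
                if not parts:          # skip blank lines
                    continue
                current_id = int(parts[1])
                if current_id < last_id:
                    return False
                last_id = current_id
        return True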

The algorithm is simple:

  1. Read one line from each file (I will call a read line an 'item').

  2. Pick an 'item' whose 'id' is the smallest (any one of them) and compare it with 'current_item':

    if both have the same id: combine them; else: write 'current_item' to the output file and replace it with 'item'.

  3. Read one more line from the file from which that 'item' was read (if it has any lines left).

  4. Repeat steps 2 and 3 until all lines from all files have been read.

    import glob
    import numpy as np
    
    path = './file[0-9]*'
    filenames = glob.glob(path) 
    files = [open(i, "r") for i in filenames] 
    output_file = open('output_file', mode = 'a')
    
    # last_ids[i] = last id number read from files[i]
    # I choose np.array because of function np.argmin
    last_ids = np.ones(shape = len(files)) * np.inf
    last_items = [None] *len(files)
    
    # Note: When we hit EOF in a file, the corresponding entries from "files", "last_items", and "last_ids" will be deleted
    
    for i in range(len(files)):
        line = files[i].readline()
        if line:
            item = line.strip().split()
            last_ids[i] = int(item[1])
            last_items[i] = item
    
    # Find an item with the smallest id 
    pos = np.argmin(last_ids)
    current_item = last_items[pos]
    # Inverting positions, so that id is first
    current_item[0], current_item[1] = current_item[1], current_item[0]  
    
    while True:    
        # Read next item from the corresponding file
        line = files[pos].readline()
        if line:
            item = line.strip().split()
            last_ids[pos] = int(item[1])
            last_items[pos] = item
        else:
            # EOF in files[pos], so delete it from the lists
            files[pos].close()
            del(files[pos])
            del(last_items[pos])
            last_ids = np.delete(last_ids, pos)
            if last_ids.size == 0:
                # No more files to read from
                break 
    
        # Find an item with the smallest id 
        pos = np.argmin(last_ids)
        if last_items[pos][1] == current_item[0]:
            # combine:
            current_item.append(last_items[pos][2])
        else:
            # write current to file and replace:
            output_file.write(' '.join(current_item) + '\n')
            current_item = last_items[pos]
            current_item[0], current_item[1] = current_item[1], current_item[0]  
    
    # The last item to write:
    output_file.write(' '.join(current_item) + '\n')
    output_file.close()
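
As a side note, the same merge can also be sketched with the standard library's heapq.merge, which lazily merges already-sorted iterables, so the files never have to fit into memory. This is only an illustrative sketch under the same assumptions (sorted files, integer ids); the file and variable names here are my own:

    import glob
    import heapq

    filenames = glob.glob('./file[0-9]*')
    files = [open(name) for name in filenames]

    def rows(f):
        # yield (id, label, coverage) tuples from one already-sorted file
        for line in f:
            parts = line.split()
            if parts:
                yield int(parts[1]), parts[0], parts[2]

    with open('output_file', 'w') as out:
        current = None                         # [id, label, cover, cover, ...]
        for rid, label, cover in heapq.merge(*(rows(f) for f in files)):
            if current is not None and current[0] == rid:
                current.append(cover)          # same id: collect another coverage value
            else:
                if current is not None:
                    out.write(' '.join(map(str, current)) + '\n')
                current = [rid, label, cover]  # new id: start a new output row
        if current is not None:                # write the final row
            out.write(' '.join(map(str, current)) + '\n')

    for f in files:
        f.close()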
    

Solution for small files

If all the files are small enough to fit into memory, the following code is definitely shorter. Whether it is also faster may depend on the data. (See the comments below.)

    import glob 
    import pandas as pd
    
    path = './file[0-9]*'    
    filenames = glob.glob(path) 
    
    df_list = []
    # Read in all files and concatenate to a single data frame:
    for file in filenames:
        df_list.append(pd.read_csv(file, header = None, sep = '\s+'))    
    df = pd.concat(df_list)
    
    # changing type for convenience:
    df[2] = df[2].astype(str)
    # sorting here is not necessary:
    # df = df.sort_values(by = 1)
    
    df2 = df.groupby(by = 1).aggregate({0:'first', 2: lambda x: ' '.join(x)})
    df2.to_csv('output_file', header = None)
    # (Columns in 'output_file' are separated by commas. )
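
(As an aside, not part of the solution above: if you prefer tab-separated output, so that it resembles the input files, to_csv also accepts a sep argument.)

    # variant of the last line above, writing tabs instead of commas
    df2.to_csv('output_file', header=False, sep='\t')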
    

Comments

I tested both solutions on several input files with 1000-10000 lines. Usually the basic solution is faster (sometimes twice as fast as the other one), but it depends on the structure of the data. If there are many repeated 'id's, pandas may have a slight edge (by a rather small margin).

I think both approaches could be combined with the pd.read_csv options chunksize and iterator. That way we could read in and operate on larger chunks of data (instead of single lines). But right now I'm not sure whether it would lead to faster code.
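
A rough sketch of what that chunked variant could look like (only an illustration of the idea, with an arbitrary chunk size): each chunk is aggregated on its own, and a second groupby merges ids that end up split across chunk or file boundaries.

    import glob
    import pandas as pd

    path = './file[0-9]*'
    partial = []
    for name in glob.glob(path):
        # chunksize makes read_csv return an iterator of DataFrames
        for chunk in pd.read_csv(name, header=None, sep='\s+', chunksize=100000):
            chunk[2] = chunk[2].astype(str)
            # aggregate within the chunk first, to keep intermediate results small
            partial.append(chunk.groupby(by=1).aggregate(
                {0: 'first', 2: lambda x: ' '.join(x)}))

    # second pass: merge ids that were split across chunk (or file) boundaries
    result = pd.concat(partial).groupby(level=0).aggregate(
        {0: 'first', 2: lambda x: ' '.join(x)})
    result.to_csv('output_file', header=False)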

If that fails (and nobody finds a better way), you might consider running a map-reduce algorithm on Amazon Web Services. There is some work at the beginning to get all the settings right, but a map-reduce algorithm is very simple for this kind of problem.