Is there a faster way to find matching features in two arrays (Python)?

Time: 2016-06-20 18:52:12

Tags: python optimization comparison bioinformatics

I'm trying to go through every feature in one file (1 per line) and find all matching features based on one column of that line in a second file. I have this solution, which does what I want on small files, but it's very slow on big files (my files have >20,000,000 lines). Here's a sample of the two input files.

My (slow) code:

FEATUREFILE = 'S2_STARRseq_rep1_vsControl_peaks.bed'
CONSERVATIONFILEDIR = './conservation/'
with open(FEATUREFILE, 'r') as peakFile, open('featureConservation.td', 'w+') as outfile:
    for line in peakFile:  # iterate lazily; readlines() would load all >20M lines into memory
        fields = line.split('\t')  # split once per line instead of five times
        chrom = fields[0]
        startPos = int(fields[1])
        endPos = int(fields[2])
        peakName = fields[3]
        enrichVal = float(fields[4])

        #Reject negative peak starts, if they exist (sometimes this can happen w/ MACS)
        if startPos > 0:
            with open(CONSERVATIONFILEDIR + chrom + '.bed', 'r') as conservationFile:
                cumulConserv = 0.
                n = 0
                for conservLine in conservationFile:
                    conservFields = conservLine.split('\t')
                    position = int(conservFields[1])
                    conservScore = float(conservFields[3])
                    if startPos <= position <= endPos:
                        cumulConserv += conservScore
                        n += 1
            # guard against peaks with no overlapping conservation positions
            featureConservation = cumulConserv / n if n else 0.
            outfile.write('\t'.join(map(str, (chrom, startPos, endPos, peakName,
                                              enrichVal, featureConservation))) + '\n')

3 Answers:

Answer 0 (score: 1)

For my purposes, the best solution seems to be rewriting the code above in pandas. Here's what worked for me on some very large files:

from __future__ import division
import pandas as pd

FEATUREFILE = 'S2_STARRseq_rep1_vsControl_peaks.bed'
CONSERVATIONFILEDIR = './conservation/'

peakDF = pd.read_csv(str(FEATUREFILE), sep = '\t', header=None, names=['chrom','start','end','name','enrichmentVal'])
#Reject negative peak starts, if they exist (sometimes this can happen w/ MACS)
peakDF.drop(peakDF[peakDF.start <= 0].index, inplace=True)
peakDF.reset_index(inplace=True)
peakDF.drop('index', axis=1, inplace=True)
peakDF['conservation'] = 1.0 #placeholder

chromNames = peakDF.chrom.unique()

for chromosome in chromNames:
    chromSubset = peakDF[peakDF.chrom == str(chromosome)]
    chromDF = pd.read_csv(str(CONSERVATIONFILEDIR) + str(chromosome)+'.bed', sep='\t', header=None, names=['chrom','start','end','conserveScore'])

    for i in xrange(0, len(chromSubset.index)):
        x = chromDF[chromDF.start >= chromSubset['start'][chromSubset.index[i]]]
        featureSubset = x[x.start < chromSubset['end'][chromSubset.index[i]]]
        x = None
        featureConservation = float(sum(featureSubset.conserveScore)/(chromSubset['end'][chromSubset.index[i]]-chromSubset['start'][chromSubset.index[i]]))
        peakDF.set_value(chromSubset.index[i], 'conservation', featureConservation)
        featureSubset = None

peakDF.to_csv("featureConservation.td", sep='\t')
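A further possible speedup (my addition, not part of the original answer): since each per-chromosome conservation file is sorted by position, the rows overlapping a peak can be located with binary search instead of two boolean mask scans. A minimal stdlib sketch, using hypothetical toy positions and scores:

```python
import bisect

# Toy stand-ins for one chromosome's (position, score) columns,
# assumed already sorted by position -- values are hypothetical.
positions = [100, 200, 300, 400, 500]
scores    = [0.1, 0.9, 0.5, 0.7, 0.3]

def mean_conservation(start, end):
    """Mean score over positions in [start, end], found by binary search."""
    lo = bisect.bisect_left(positions, start)
    hi = bisect.bisect_right(positions, end)
    if hi == lo:
        return 0.0  # no conservation positions inside the peak
    return sum(scores[lo:hi]) / (hi - lo)

print(mean_conservation(150, 450))  # averages scores at positions 200, 300, 400
```

Each lookup is then O(log N) plus the size of the overlap, instead of two full passes over the chromosome's rows.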

Answer 1 (score: 0)

First off, you loop through the whole conservation file every time you read a line from peakFile, so adding a break after n+=1 should help somewhat, assuming there is only one match.
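A sketch of where that break would go, with hypothetical toy rows standing in for conservationFile (note the break assumes exactly one row can match, which the question's summing loop does not guarantee):

```python
# Toy (position, score) rows standing in for one conservation file.
conserv_rows = [(100, 0.2), (250, 0.8), (400, 0.6)]
startPos, endPos = 200, 300

cumulConserv = 0.
n = 0
for position, conservScore in conserv_rows:
    if startPos <= position <= endPos:
        cumulConserv += conservScore
        n += 1
        break  # the answer's suggestion: stop after the (assumed single) match

print(cumulConserv, n)  # 0.8 1
```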

Another option would be to try mmap, which might help with buffering.
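A minimal sketch of reading a file line by line through mmap (my illustration; the temp file and its rows are placeholders for a real conservation file):

```python
import mmap
import os
import tempfile

# Create a throwaway file standing in for a conservation .bed file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"chr1\t100\t101\t0.2\nchr1\t250\t251\t0.8\n")
    path = f.name

# Map the file into memory and iterate its lines without read() buffering.
with open(path, 'r+b') as fh:
    mm = mmap.mmap(fh.fileno(), 0)
    lines = []
    for line in iter(mm.readline, b""):  # mm.readline returns b"" at EOF
        lines.append(line.rstrip(b"\n").split(b"\t"))
    mm.close()

os.remove(path)
print(len(lines))  # 2
```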

Answer 2 (score: 0)

Bedtools was made for exactly this, in particular the intersect function:

http://bedtools.readthedocs.io/en/latest/content/tools/intersect.html
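For example (my sketch, not part of the original answer; filenames follow the question's, and the per-chromosome conservation files would first need to be concatenated into one BED file):

```shell
# -a: the peak features; -b: the conservation scores;
# -wa -wb reports both records for every overlapping pair
bedtools intersect -a S2_STARRseq_rep1_vsControl_peaks.bed \
                   -b conservation_all_chroms.bed -wa -wb > overlaps.td
```

The per-peak mean score could then be computed from the joined output, or with `bedtools map -o mean` instead of scanning the files in Python.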