A more efficient way to retrieve lines from a large file

Date: 2018-03-25 09:10:15

Tags: python bioinformatics biopython dna-sequence

I have a data file with the IDs of 1,786,916 records, and I want to retrieve the corresponding records from another file that contains about 4.8 million records (in this case DNA sequences, but basically just plain text). I wrote a Python script to do this, but it is taking a very long time to run (it's on day 3 and has only finished 12%). Since I'm relatively new to Python, I'd like to know whether anyone has suggestions for making it faster.

Here is a sample of the data file with the IDs (the ID in this example is ANICH889-10):

ANICH889-10 k__Animalia; p__Arthropoda; c__Insecta; o__Lepidoptera; f__Psychidae; g__Ardiosteres; s__Ardiosteres sp. ANIC9
ARONW984-15 k__Animalia; p__Arthropoda; c__Arachnida; o__Araneae; f__Clubionidae; g__Clubiona; s__Clubiona abboti

Here is a sample of the second file containing the records:

>ASHYE2081-10|Creagrura nigripesDHJ01|COI-5P|HM420985
ATTTTATACTTTTTATTAGGAATATGATCAGGAATAATTGGTCTTTCAATAAGAATCATTATCCGTATTGAATTAAGAAATCCAGGATCTATTATTAATAATGACCAAATTTATAATTCATTAATTACTATACACGCACTATTAATAATTTTTTTTTTAGTTATACCTGTAATAATTGGAGGATTTGGAAATTGATTAATTCCTATTATAATTGGAGCCCCAGATATAGCATTTCCACGAATAAACAATCTTAGATTTTGATTATTAATCCCATCAATTTTCATATTAATATTAAGATCAATTACTAATCAAGGTGTAGGAACAGGATGAACAATATATCCCCCATTATCATTAAATATAAATCAAGAAGGAATATCAATAGATATATCAATTTTTTCTTTACATTTAGCAGGAATATCCTCAATTTTAGGATCAATTAATTTCATTTCAACTATTTTAAATATAAAATTTATTAATTCTAATTATGATCAATTAACTTTATTTTCATGATCAATTCTAATTACTACTATTTTATTATTACTAGCAGTCCCTGTATTAGCAGGAGCAATTACTATAATTTTAACTGATCGAAATTTAAATACTTCTTTTTTTGATCCTAGAGGAGGAGGAGATCCAATTT-----------------
>BCISA145-10|Hemiptera|COI-5P
AACTCTATACTTTTTACTAGGATCCTGGGCAGGAATAGTAGGAACATCATTAAGATGAATAATCCGAATTGAACTAGGACAACCTGGATCTTTTATTGGAGATGACCAAACTTATAATGTAATTGTAACTGCCCACGCATTTGTAATAATTTTCTTTATAGTTATACCAATTATAATTGGAGGATTTGGAAATTGATTAATTCCCTTAATAATTGGAGCACCCGATATAGCATTCCCACGAATGAATAACATAAGATTTTGATTGCTACCACCGTCCCTAACACTTCTAATCATAAGTAGAATTACAGAAAGAGGAGCAGGAACAGGATGAACAGTATACCCTCCATTATCCAGAAACATCGCCCATAGAGGAGCATCTGTAGATTTAGCAATCTTTTCCCTACATCTAGCAGGAGTATCATCAATTTTAGGAGCAGTTAACTTCATTTCAACAATTATTAATATACGACCAGCAGGAATAACCCCAGAACGAATCCCATTATTTGTATGATCTGTAGGAATTACAGCACTACTACTCCTACTTTCATTACCCGTACTAGCAGGAGCCATTACCATACTCTTAACTGACCGAAACTTCAATACTTCTTTTTTTGACCCTGCTGGAGGAGGAGATCCCATCCTATATCAACATCTATTC

Note, however, that in the second file the DNA sequences are split across several lines rather than written on a single line, and the lines are not always the same length.

Edit

Here is the output I want:

>ANICH889-10
GGGATTTGGTAATTGATTAGTTCCTTTAATA---TTGGGGGCCCCTGACATAGCTTTTCCTCGTATAAATAATATAAGATTTTGATTATTACCTCCCTCTCTTACATTATTAATTTCAAGAAGAATTGTAGAAAATGGAGCTGGGACTGGATGAACTGTTTACCCTCCTTTATCTTCTAATATCGCCCATAGAGGAAGCTCTGTAGATTTA---GCAATTTTCTCTTTACATTTAGCAGGAATTTCTTCTATTTTAGGAGCAATTAATTTTATTACAACAATTATTAATATACGTTTAAATAATTTATCTTTCGATCAAATACCTTTATTTGTTTGAGCAGTAGGAATTACAGCATTTTTACTATTACTTTCTTTACCTGTATTAGCTGGA---GCTATTACTATATTATTAACT---------------------------------------------------------------------------
>ARONW984-15
TGGTAACTGATTAGTTCCATTAATACTAGGAGCCCCTGATATAGCCTTCCCCCGAATAAATAATATAAGATTTTGACTTTTACCTCCTTCTCTAATTCTTCTTTTATCAAGGTCTATTATNGAAAATGGAGCA---------GGAACTGGCTGAACAGTTTACCCTCCCCTTTCTTNTAATATTTCCCATGCTGGAGCTTCTGTAGATCTTGCAATCTTTTCCCTACACCTAGCAGGTATTTCCTCAATCCTAGGGGCAGTTAAT------TTTATCACAACCGTAATTAACATACGCTCTAGAGGAATTACATTTGATCGAATGCCTTTATTTGTATGATCTGTATTAATTACAGCTATTCTTCTACTACTCTCCCTCCCAGTATTAGCAGGGGCTATTACAATACTACTCACAGACCGAAATTTAAAT-----------------------------------

Here is the Python script I wrote for this:

from Bio import SeqIO
from Bio.Seq import Seq
import csv
import sys

#Name of the datafile
Taxonomyfile = "02_Arthropoda_specimen_data_less.txt"

#Name of the original sequence file
OrigTaxonSeqsfile = "00_Arthropoda_specimen.fasta"

#Name of the output sequence file
f4 = open("02_Arthropoda_specimen_less.fasta", 'w')

#Reading the datafile and extracting record IDs   
TaxaKeep = []
with open(Taxonomyfile, 'r') as f1:
    datareader = csv.reader(f1, delimiter='\t')
    for item in datareader:
        TaxaKeep.append(item[0])
    print(len(TaxaKeep))    

#Filtering sequence file to keep only those sequences with the desired IDs
datareader = SeqIO.parse(OrigTaxonSeqsfile, "fasta")
for seq in datareader:
    for item in TaxaKeep:
        if item in seq.id:
            f4.write('>' + str(item) + '\n')
            f4.write(str(seq.seq) + '\n')

I think the trouble here is that I'm looping through the 1.7 million record names for each of the 4.8 million records. I thought about building a dictionary or something for the 4.8 million records, but I can't figure out how. Any suggestions (including non-Python ones)?

Thanks!

3 Answers:

Answer 0 (score: 3)

I think you can get a huge performance improvement by improving your lookups.

Using a set() will help you here. Sets are designed for very fast data lookups and they don't store duplicate values, which makes them ideal for filtering data. So let's store all the taxonomy IDs from the input file in a set.

from Bio import SeqIO
from Bio.Seq import Seq
import csv
import sys

taxonomy_file = "02_Arthropoda_specimen_data_less.txt"
orig_taxon_sequence_file = "00_Arthropoda_specimen.fasta"
output_sequence_file = "02_Arthropoda_specimen_less.fasta"

# build a set for fast look-up of IDs
with open(taxonomy_file, 'r', newline='') as fp:
    datareader = csv.reader(fp, delimiter='\t')
    first_column = (row[0] for row in datareader)
    taxonomy_ids = set(first_column)

# use the set to speed up filtering the input FASTA file
with open(output_sequence_file, 'w') as fp:
    for seq in SeqIO.parse(orig_taxon_sequence_file, "fasta"):
        # the record ID is the part of the FASTA header before the first "|"
        record_id = seq.id.split('|')[0]
        if record_id in taxonomy_ids:
            fp.write('>')
            fp.write(record_id)
            fp.write('\n')
            fp.write(str(seq.seq))
            fp.write('\n')
  • I've renamed some of the variables. Naming a variable f4 and then only explaining it in a comment saying "#Name of the output sequence file" is pointless. Why not drop the comment and call the variable output_sequence_file?
  • (row[0] for row in datareader) is a generator expression. A generator is an iterable object, which means it doesn't compute the whole list of IDs up front - it only knows what to do when asked. This saves time and memory by not building a temporary list. One line later, the set() constructor, which accepts any iterable, builds the set of all IDs found in the first column.
  • In the second block we take the part of the header before the first "|" as the record ID and use if record_id in taxonomy_ids to decide whether a sequence should be written out. in is very fast on sets.
  • I call .write() several times rather than building a temporary string out of the pieces. seq.id is already a string, but seq.seq is a Seq object, so it needs a str() call before it can be written.
  • I don't know much about the FASTA file format, but a quick look at the BioPython documentation suggests that SeqIO.write() would be a better way to produce it; a sketch follows below.
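
For example, here is a minimal sketch of that idea (not part of the answer above), reusing the file names and the taxonomy_ids set from the code above. Note that SeqIO.write() produces standard FASTA, so by default it wraps each sequence at 60 characters per line rather than writing it on a single line as in the desired output:

from Bio import SeqIO

def wanted_records(fasta_path, ids):
    """ yield only the records whose ID (the part before the first "|") is in ids """
    for rec in SeqIO.parse(fasta_path, "fasta"):
        rec.id = rec.id.split('|')[0]
        rec.description = ""          # so the output header is just ">ID"
        if rec.id in ids:
            yield rec

with open(output_sequence_file, 'w') as fp:
    count = SeqIO.write(wanted_records(orig_taxon_sequence_file, taxonomy_ids), fp, "fasta")
print(count, "records written")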

Answer 1 (score: 1)

Your reasoning is correct: with the two nested for loops you are spending time on 4.8 million * 1.7 million repetitions of a single operation.

That's why we will use a SQLite database to store all the information contained in OrigTaxonSeqsfile. Why SQLite? Because

  • SQLite is built into Python
  • SQLite supports indexes

I can't begin to explain the CS theory here, but indexes are a godsend when searching data in cases like yours.

Once the data is indexed, you just look up each record ID from Taxonomyfile in the database and write it to f4, the final output file.

The following code should do what you want, and it has these advantages:

  • it shows your progress in terms of the number of lines processed
  • it only needs Python 3, no bio libraries required
  • it uses generators, so it never has to read a whole file into memory at once
  • it doesn't rely on list/set/dict, because (in this particular case) they might consume too much RAM

Here is the code:

import sqlite3
from itertools import groupby
from contextlib import contextmanager

Taxonomyfile = "02_Arthropoda_specimen_data_less.txt"
OrigTaxonSeqsfile = "00_Arthropoda_specimen.fasta"

@contextmanager
def create_db(file_name):
    """ create SQLite db, works as context manager so file is closed safely"""
    conn = sqlite3.connect(file_name, isolation_level="IMMEDIATE")
    cur = conn.cursor()
    # executescript() is used because the string holds two statements (table + index)
    cur.executescript("""
        CREATE TABLE taxonomy
        ( _id INTEGER PRIMARY KEY AUTOINCREMENT
        , record_id TEXT NOT NULL
        , record_extras TEXT
        , dna_sequence TEXT
        );
        CREATE INDEX idx_taxn_recID ON taxonomy (record_id);
    """)
    yield cur
    conn.commit()
    conn.close()
    return

def parse_fasta(file_like):
    """ generate that yields tuple containing record id, extra info
    in tail of header and the DNA sequence with newline characters
    """
    # inspiration = https://www.biostars.org/p/710/
    try:
        from Bio import SeqIO
    except ImportError:
        fa_iter = (x[1] for x in groupby(file_like, lambda line: line[0] == ">"))
        for header in fa_iter:
            # remove the >
            info = next(header)[1:].strip()
            # separate the record id from the rest of the sequence info
            x = info.split('|')
            recID, recExtras = x[0], x[1:]
            # build the DNA sequence from the following group of lines
            sequence = "".join(s.strip() for s in next(fa_iter))
            yield recID, recExtras, sequence
    else:
        fasta_sequences = SeqIO.parse(file_like, 'fasta')
        for fasta in fasta_sequences:
            info, sequence = fasta.id, str(fasta.seq)
            # separate the record id from the rest of the sequence info
            x = info.split('|')
            recID, recExtras = x[0], x[1:]
            yield recID, recExtras, sequence
    return

def prepare_data(txt_file, db_file):
    """ put data from txt_file into db_file building index on record id """
    i = 0
    src_gen = open(txt_file, mode='rt')
    fasta_gen = parse_fasta(src_gen)
    with create_db(db_file) as db:
        for recID, recExtras, dna_seq in fasta_gen:
            db.execute("""
                INSERT INTO taxonomy
                (record_id, record_extras, dna_sequence) VALUES (?,?,?)
                """,
                [recID, "|".join(recExtras), dna_seq]
            )
            i += 1
            if i % 100 == 0:
                print(i, 'records digested into the sql database')
    src_gen.close()
    return

def get_DNA_seq_of(recordID, src):
    """ search for recordID in src database and return a formatted string """
    ans = ""
    exn = src.execute("SELECT * FROM taxonomy WHERE record_id=?", [recordID])
    for match in exn.fetchall():
        a, b, c, dna_seq = match
        ans += ">%s\n%s\n" % (recordID, dna_seq)
    return ans

def main():
    # first of all prepare an optimized database
    db_file = OrigTaxonSeqsfile + ".sqlite"
    prepare_data(OrigTaxonSeqsfile, db_file)
    # now start searching and writing
    progress = 0
    db = sqlite3.connect(db_file)
    cur = db.cursor()
    out_file = open("02_Arthropoda_specimen_less.fasta", 'wt')
    taxa_file = open(Taxonomyfile, 'rt')
    with taxa_file, out_file:
        for line in taxa_file:
            question = line.split("\t")[0]
            answer = get_DNA_seq_of(question, cur)
            out_file.write(answer)
            progress += 1
            if progress % 100 == 0:
                print(progress, 'lines processed')
    db.close()

if __name__ == '__main__':
    main()
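
As a quick sanity check (this is not part of the answer's code), you can ask SQLite whether the record_id lookup really uses the idx_taxn_recID index created above; a minimal sketch, assuming the database file that main() builds:

import sqlite3

db = sqlite3.connect("00_Arthropoda_specimen.fasta.sqlite")
plan = db.execute(
    "EXPLAIN QUERY PLAN SELECT dna_sequence FROM taxonomy WHERE record_id = ?",
    ["ANICH889-10"],
).fetchall()
print(plan)  # the plan should mention something like "USING INDEX idx_taxn_recID"
db.close()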

Feel free to ask for any clarification. If you get any errors or the output isn't as expected, send me a 200-line sample of Taxonomyfile and OrigTaxonSeqsfile and I'll update the code.

Speed gains

The following is a rough estimate, considering only disk I/O, since that is the slowest part.

Let a = 4.8 million and b = 1.7 million.

With the old approach you have to perform a * b disk I/O operations, i.e. about 8.16 trillion.

With my approach, once the indexing is done (roughly 2 * a operations: one read and one write per FASTA record), you only have to look up the 1.7 million record IDs. So the total in my approach is about 2 * (a + b), i.e. 13 million disk I/O operations. That is not small either, but this approach is more than 600,000 times faster.
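
A quick back-of-the-envelope check of those figures (the record counts are the ones assumed above):

a = 4_800_000           # records in the FASTA file
b = 1_700_000           # record IDs to look up
naive = a * b           # one full scan of the FASTA file per ID
indexed = 2 * (a + b)   # one indexing pass plus one indexed lookup per ID
print(naive)            # 8_160_000_000_000  (~8.16 trillion)
print(indexed)          # 13_000_000
print(naive // indexed) # ~627,000-fold fewer disk operations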

Why not dict()?

I would get scolded by my boss and my professors if I were caught using too much CPU/RAM. If you own the system, a simpler dict-based approach is:

from itertools import groupby

Taxonomyfile = "02_Arthropoda_specimen_data_less.txt"
OrigTaxonSeqsfile = "00_Arthropoda_specimen.fasta"

def parse_fasta(file_like):
    """ generate that yields tuple containing record id, extra info
    in tail of header and the DNA sequence with newline characters
    """
    from Bio import SeqIO
    fasta_sequences = SeqIO.parse(file_like, 'fasta')
    for fasta in fasta_sequences:
        info, sequence = fasta.id, str(fasta.seq)
        # separate the record id from the rest of the sequence info
        x = info.split('|')
        recID, recExtras = x[0], x[1:]
        yield recID, recExtras, sequence
    return

def prepare_data(txt_file, dct):
    """ put data from txt_file into the dct dictionary, keyed by record id """
    i = 0
    with open(txt_file, mode='rt') as src_gen:
        fasta_gen = parse_fasta(src_gen)
        for recID, recExtras, dna_seq in fasta_gen:
            dct[recID] = dna_seq
            i += 1
            if i % 100 == 0:
                print(i, 'records digested into the dict')
    return

def get_DNA_seq_of(recordID, src):
    """ search for recordID in src database and return a formatted string """
    ans = ""
    dna_seq = src[recordID]
    ans += ">%s\n%s\n" % (recordID, dna_seq)
    return ans

def main():
    # first of all prepare an optimized database
    dct = dict()
    prepare_data(OrigTaxonSeqsfile, dct)
    # now start searching and writing
    progress = 0
    out_file = open("02_Arthropoda_specimen_less.fasta", 'wt')
    taxa_file = open(Taxonomyfile, 'rt')
    with taxa_file, out_file:
        for line in taxa_file:
            question = line.split("\t")[0]
            answer = get_DNA_seq_of(question, dct)
            out_file.write(answer)
            progress += 1
            if progress % 100 == 0:
                print(progress, 'lines processed')
    return

if __name__ == '__main__':
    main()

Answer 2 (score: 0)

I asked for clarification in the comments on your question, but you haven't responded so far (no criticism intended), so before I have to go I'll try to answer your question. My code is based on the following assumptions.

  1. In the second data file, each record takes two lines: the first line is a header of sorts and the second is the ACGT sequence.
  2. In the header line we have a ">" prefix, then a number of fields separated by "|", the first of which is the ID of the whole two-line record.
  3. Based on the above assumptions:

    # If possible, no hardcoded filenames, use sys.argv and the command line
    import sys
    
    # command line sanity check
    if len(sys.argv) != 4:
        print('A descriptive error message')
        sys.exit(1)
    
    # Names of the input and output files
    fn1, fn2, fn3 = sys.argv[1:]
    
    # Use a set comprehension to load the IDs from the first file
    IDs = {line.split()[0] for line in open(fn1)} # a set
    
    # Operate on the second file
    with open(fn2) as f2:
    
        # It is possible to use `for line in f2: ...` but here we have(?)
        # two line records, so it's a bit different
        while True:
    
            # Try to read two lines from the file (Python 3: next(f2), not f2.next())
            try:
                header = next(f2)
                payload = next(f2)
            # no more lines? break out from the while loop...
            except StopIteration:
                break
    
            # Sanity check on the header line
            if header[0] != ">":
                print('Incorrect header line: "%s".'%header)
                sys.exit(1)
    
            # Split the header line on "|", find the current ID
            ID = header[1:].split("|")[0]
    
            # Check if the current ID was mentioned in the first file
            if ID in IDs:
                # your code
    

    Since there is no inner loop, this should be about 6 orders of magnitude faster... whether it meets your needs remains to be seen :-)
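
Note that assumption 1 does not quite match the question, which says the sequences are wrapped over several lines. Here is a minimal sketch (not from the answer above) of the same set-based filtering, adapted to accumulate wrapped sequence lines and write each sequence back out on a single line, as in the desired output; the script name and usage message are placeholders:

import sys

# usage: python filter_fasta.py <id_file> <fasta_in> <fasta_out>
if len(sys.argv) != 4:
    print('usage: filter_fasta.py <id_file> <fasta_in> <fasta_out>')
    sys.exit(1)

id_file, fasta_in, fasta_out = sys.argv[1:]

# Set of wanted IDs: the first whitespace-separated field of each non-empty line
IDs = {line.split()[0] for line in open(id_file) if line.strip()}

with open(fasta_in) as src, open(fasta_out, 'w') as dst:
    record_id, keep, parts = None, False, []
    for line in src:
        if line.startswith('>'):
            if keep:  # write out the previous record before starting a new one
                dst.write('>%s\n%s\n' % (record_id, ''.join(parts)))
            record_id = line[1:].split('|')[0].strip()
            keep = record_id in IDs
            parts = []
        elif keep:
            parts.append(line.strip())
    if keep:  # don't forget the last record in the file
        dst.write('>%s\n%s\n' % (record_id, ''.join(parts)))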