Extracting information using 2 separate lists

Date: 2014-05-08 11:20:39

Tags: python list python-2.7 extract

I want to use Python to extract certain information from a large file. I have 3 input files. The first input file (input_file) is the data file, a tab-separated file with 3 columns, which looks like this:

engineer-n imposition-n 2.82169386609e-05
motor-n imposition-n 0.000102011705117
creature-n imposition-n 0.000121321951973
bomb-n imposition-n 0.000680302090112
sedation-n oppression-n 0.000397074586994
roadblock-n oppression-n 5.96190620847e-05
liability-n oppression-n 0.012845281978
currency-n oppression-n 0.000793989880202

The second input file (colA_file) is a single-column list, which looks like this:

bomb-n
sedation-n
roadblock-n
surrender-n

The third input file (colB_file) is also a single-column list (it is needed in addition to colA_file because it contains different information), which looks like this:

adjective-n
homeless-n
imposition-n
oppression-n

I want to extract from the input file the lines whose entries are found in colA and colB. With the sample data I provided, that means filtering out everything except the following lines:

bomb-n imposition-n 0.000680302090112
sedation-n oppression-n 0.000397074586994
roadblock-n oppression-n 5.96190620847e-05

I wrote the following Python code to solve this problem:

def test_fnc(input_file, colA_file, colB_file, output_file):
    nounA = []
    with open(colA_file, "rb") as opened_colA:
        for aLine in opened_colA:
            nounA.append(aLine.strip())
            #print nounA

    nounB = []
    with open(colB_file, "rb") as opened_colB:
        for bLine in opened_colB:
            nounB.append(bLine.strip())
            #print nounB

    with open(output_file, "wb") as outfile:
        with open(input_file, "rb") as opened_input:
            for cLine in opened_input:
                splitted_cLine = cLine.split()
                #print splitted_cLine
                if splitted_cLine[0] in nounA and splitted_cLine[1] in nounB:
                    outstring = "\t".join(splitted_cLine)
                    outfile.write(outstring + "\n")

test_fnc(input_file, colA_file, colB_file, output_file)

However, it only outputs 1 line, as if it were not iterating over the list input provided. It seems as though my lists are appending onto each other, starting from one item and growing with each appended item. So I also tried referencing the lists as follows:

    for bLine in opened_colB:
        nounB = bLine

This gave the same result as above.
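
For reference, here is a minimal illustration (with made-up list contents) of why the second attempt cannot work: rebinding the name on every iteration leaves only the last raw line as a plain string, whereas append accumulates all stripped lines into a list:

lines = ["bomb-n\n", "sedation-n\n", "roadblock-n\n"]

# Rebinding: after the loop, nounB is just the last line, still a string.
for bLine in lines:
    nounB = bLine
print(repr(nounB))    # 'roadblock-n\n'

# Appending: after the loop, nounB is a list holding every stripped line.
nounB = []
for bLine in lines:
    nounB.append(bLine.strip())
print(nounB)          # ['bomb-n', 'sedation-n', 'roadblock-n']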

2 answers:

Answer 0 (score: 1):

import re

nounA=[]
with open('col1.txt', "rb") as opened_colA:
    for aLine in opened_colA:
        nounA.append(aLine.strip())

patterns = [r'\b%s\b' % re.escape(s.strip()) for s in nounA]
col1 = re.compile('|'.join(patterns))
nounB=[]
with open('col2.txt', "rb") as opened_colA:
    for aLine in opened_colA:
        nounB.append(aLine.strip())

patterns = [r'\b%s\b' % re.escape(s.strip()) for s in nounB]
col2 = re.compile('|'.join(patterns))

with open('test1.txt', "rb") as opened_colA:
    for aLine in opened_colA:
        if col1.search(aLine):
            if col2.search(aLine):
                print aLine

# just write aline to your output file.

Explanation: first I take all the words from colA and build a regular expression out of them; I do the same for colB (compiled as col2). Then, using those compiled patterns, I search the input file and print the matching lines.

'\b' matches a word boundary. It is very useful when you search for the word 'cat' but would otherwise also match 'catch'; with '\b' only the standalone word 'cat' is found.
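
A quick sketch of that behaviour (hypothetical strings, works in Python 2.7 as used in the question):

import re

pattern = re.compile(r'\bcat\b')
print(pattern.search("the cat sat") is not None)   # True: the whole word 'cat' matches
print(pattern.search("try to catch") is not None)  # False: 'catch' is not the word 'cat'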

Answer 1 (score: 1):

If you don't mind the dependency, I would use pandas/numpy. With a pandas.DataFrame you can run an isin check on its columns (a sketch of that approach follows the code below). Otherwise I suggest using sets, since regular expressions should be much slower. Like this:

with open(colA_file, "rb") as file_h:
    noun_a = set(line.strip() for line in file_h)

with open(colB_file, "rb") as file_h:
    noun_b = set(line.strip() for line in file_h)

with open(output_file, "wb") as outfile:
    with open(input_file, "rb") as opened_input:
        for line in opened_input:
            split_line = line.split()
            if split_line[0] in noun_a and split_line[1] in noun_b:
                outfile.write(line)
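
For completeness, a rough sketch of the pandas variant mentioned above, assuming pandas is installed; the column labels colA/colB/value are arbitrary names chosen for this example, not part of the original data:

import pandas as pd

# Read the tab-separated data file; assign arbitrary column labels.
df = pd.read_csv(input_file, sep="\t", header=None, names=["colA", "colB", "value"])

# Each list file has a single column; take it as a Series.
noun_a = pd.read_csv(colA_file, header=None)[0]
noun_b = pd.read_csv(colB_file, header=None)[0]

# Keep only rows whose first column appears in noun_a and whose second appears in noun_b.
filtered = df[df["colA"].isin(noun_a) & df["colB"].isin(noun_b)]
filtered.to_csv(output_file, sep="\t", header=False, index=False)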