Parsing an unstructured text file with Python

Date: 2015-06-10 22:00:50

Tags: python regex parsing csv pyparsing

I have a text file in which a couple of excerpts look like this:

Page 1 of 515                   
Closing Report for Company Name LLC                 

222 N 9th Street, #100 & 200, Las Vegas, NV, 89101                  

File number:    Jackie Grant    Status: Fell Thru   Primary closing party:  Seller
Acceptance: 01/01/2001  Closing date:   11/11/2011  Property type:  Commercial Lease
MLS number: Sale price: $200,000    Commission: $1,500.00   
Notes:  08/15/2000 02:30PM by Roger Lodge This property is a Commercial Lease handled by etc..  

Seller: Company Name LLC                    
Company name:   Company Name LLC                
Address:    222 N 9th Street, #100 & 200, Las Vegas, NV, 89101              
Home:   Pager:              
Business:   Fax:                
Mobile: Email:              
Buyer: Tomlinson, Ladainian                 
Address:    222 N 9th Street, #100 & 200, Las Vegas, NV, 89101              
Home:   Pager:              
Business:   555-555-5555    Fax:            
Mobile: Email:              
Lessee Agent: Blank, Arthur                 
Company name:   Sprockets Inc.              
Address:    5001 Old Man Dr, North Las Vegas, NV, 89002             
Home:   (575) 222-3455  Pager:          
Business:   Fax:    999-9990            
Mobile: (702) 600-3492  Email:  sprockets@yoohoo.com        
Leasing Agent: Van Uytnyck, Chameleon                   
Company name:   Company Name LLC                
Address:                    
Home:   Pager:              
Business:   Fax:    909-222-2223            
Mobile: 595-595-5959    Email:          

(there should be 2 blank lines here; this note is not in the actual text file)


Printed on Friday, June 12, 2015                    
Account owner: Roger Goodell                    
Page 2 of 515                   
Report for Adrian (Allday) Peterson                     

242 N 9th Street, #100 & 200                    

File number:    Soap    Status: Closed/Paid Primary closing party:  Buyer
Acceptance: 01/10/2010  Closing date:   01/10/2010  Property type:  RRR
MLS number: Sale price: $299,000    Commission: 33.00%  

Seller: SOS, Bank                   
Address:    242 N 9th Street, #100 & 200                
Home:   Pager:              
Business:   Fax:                
Mobile: Email:              
Buyer: Sabel, Aaron                 
Address:                    
Home:   Pager:              
Business:   Fax:                
Mobile: Email:  sia@yoohoo.com          
Escrow Co: Schneider, Patty                 
Company name:   National Football League                
Address:    242 N 9th Street, #100 & 200                
Home:   Pager:              
Business:   800-2009    Fax:    800-1100        
Mobile: Email:              
Buyers Agent: Munchak, Mike                 
Company name:   Commission Group                
Address:                    
Home:   Pager:              
Business:   Fax:                
Mobile: 483374-3892 Email:  donation@yoohoo.net     
Listing Agent: Ricci, Christina                 
Company name:   Other Guys              
Address:                    
Home:   Pager:              
Business:   Fax:                
Mobile: 888-333-3333    Email:  general.adama@cylon.net      

Here is my code:

import re

file = open('file-path.txt','r')

# if there are more than two consecutive blank lines, then we start a new Entry
entries = []
curr = []
prev_blank = False
for line in file:
    line = line.rstrip('\n').strip()
    if (line == ''):
        if prev_blank == True:
            # end of the entry: append it and reset
            if(len(curr) > 0):
                entries.append(curr)
                print curr
                curr = []
                prev_blank = False
        else:
            prev_blank = True
    else:
        prev_blank = False
        # we need to parse the line
        line_list = line.split()
        str = ''
        start = False
        for item in line_list:
            if re.match('[a-zA-Z\s]+:.*',item):
                if len(str) > 0:
                    curr.append(str)
                str = item
                start = True
            elif start == True:
                str = str + ' ' + item

Here is the output:

['number: Jackie Grant', 'Status: Fell Thru Primary closing', 'Acceptance: 01/01/2001 Closing', 'date: 11/11/2011 Property', 'number: Sale', 'price: $200,000', 'Home:', 'Business:', 'Mobile:', 'Home:', 'Business: 555-555-5555', 'Mobile:', 'Home: (575) 222-3455', 'Business:', 'Mobile: (702) 600-3492', 'Home:', 'Business:', 'Mobile: 595-595-5959']

My issues are as follows:

  1. First, there should be two records in the output, but I only get one.
  2. At the top of the text, my script has no way of knowing where the previous value ends and a new one begins: 'Status: Fell Thru' should be a single value, and 'Primary closing party: Buyer', 'Acceptance: 01/10/2010', 'Closing date: 01/10/2010', 'Property type: RRR', 'MLS number:', 'Sale price: $299,000' and 'Commission: 33.00%' should all be captured.
  3. Once that is parsed correctly, I will need to parse again to split the keys from the values (i.e. 'Closing date': '01/10/2010'), ideally into a list of dicts.
  4. I can't think of a better approach than using regexes to pick out the keys and then grabbing the text fragments that follow them.

    Once that's done, I'd like a csv with a header row of the keys that I can import into pandas with read_csv; a minimal sketch of that last step follows below. I've spent quite a bit of time on this..
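
To make the end goal concrete, here is a sketch of just that last step, assuming the parsing has already produced a list of dicts (the records and the file name below are made up):

import csv
import pandas as pd

# Made-up records standing in for the parsed output.
records = [
    {'File number': 'Jackie Grant', 'Status': 'Fell Thru', 'Sale price': '$200,000'},
    {'File number': 'Soap', 'Status': 'Closed/Paid', 'Sale price': '$299,000'},
]

# The union of all keys becomes the csv header row.
fieldnames = sorted(set(key for rec in records for key in rec))

with open('closings.csv', 'wb') as f:   # on Python 3: open('closings.csv', 'w', newline='')
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)

print(pd.read_csv('closings.csv'))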

2 answers:

Answer 0 (score: 3)

(This isn't a complete answer, but it's too long for a comment.)

  • Field names can contain spaces (e.g. MLS number)
  • A single line can carry several fields (e.g. Home:   Pager:)
  • The Notes field contains a timestamp, which itself contains a :

This means you cannot identify the field names with a regex alone: it has no way of knowing whether "MLS" is part of the preceding data value or of the field name that follows.
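
For instance (purely illustrative), a generic "word in front of a colon" pattern applied to the MLS line from the first record drops "MLS" entirely and mislabels the fields:

import re

line = "MLS number: Sale price: $200,000    Commission: $1,500.00"

# The pattern cannot know that "MLS number" is one field name, so "MLS" is
# lost and "Sale" ends up as the value of "number" instead of part of a name.
pairs = re.findall(r'(\S+):\s*([^:]*?)(?=\s+\S+:|$)', line)
print(pairs)
# [('number', 'Sale'), ('price', '$200,000'), ('Commission', '$1,500.00')]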

Some of the Home: Pager: lines belong to the Seller, others to the Buyer, the Lessee Agent or the Leasing Agent. That means the naive line-by-line approach I take below does not work either.

Here is the code I was working on. It runs against your test data but, for the reasons above, produces incorrect output. I'm including it as a reference for the approach:

replaces = [
    ('Closing Report for', 'Report_for:')
    ,('Report for', 'Report_for:')
    ,('File number', 'File_number')
    ,('Primary closing party', 'Primary_closing_party')
    ,('MLS number', 'MLS_number')
    ,('Sale Price', 'Sale_Price')
    ,('Account owner', 'Account_owner')
    # ...
    # etc.
]

def fix_linemash(data):
    # splits many fields on one line into several lines

    results = []
    mini_collection = []
    for token in data.split(' '):
        if ':' not in token:
            mini_collection.append(token)
        else:
            results.append(' '.join(mini_collection))
            mini_collection = [token]
    results.append(' '.join(mini_collection))  # flush the final field too

    return [line for line in results if line]

def process_record(data):
    # takes a collection of lines
    # fixes them, and builds a record dict
    record = {}

    for old, new in replaces:
        data = data.replace(old, new)

    for line in fix_linemash(data):
        print line
        name, value = line.split(':', 1)
        record[name.strip()] = value.strip()

    return record


records = []
collection = []
blank_flag = False

for line in open('d:/lol.txt'):
    # Read through the file collecting lines and
    # looking for double blank lines
    # every pair of blank lines, process the stored ones and reset

    line = line.strip()
    if line.startswith('Page '): continue
    if line.startswith('Printed on '): continue

    if not line and blank_flag:      # record finished
        records.append( process_record(' '.join(collection)) )
        blank_flag = False
        collection = []

    elif not line:  # maybe end of record?
        blank_flag = True

    else:   # false alarm, record continues
        blank_flag = False
        collection.append(line)

for record in records:
    print record

I now think it would be a bit better to do some pre-processing tidy-up steps on the data first:

  1. Remove the "Page n of n" and "Printed on ..." lines, and anything similar.
  2. Identify every valid field name, then split the combined lines so that each line carries exactly one field and that field starts at the beginning of the line.
  3. Run through the Seller/Buyer/Agent blocks and replace their field names with a prefixed version, e.g. Email: -> Seller Email:
  4. Then write a record parser, which should be easy: look for two blank lines, split each line at the first colon, use the left part as the field name and the right part as the value, and store them however you want (nb. dictionary keys are unordered). A rough sketch of steps 2-4 follows below.
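
A rough sketch of those steps using a hand-maintained list of field names (FIELD_NAMES, split_fields and parse_record are names I made up, the field list is incomplete, and the "Report for" header lines, which have no colon, would still need the replace treatment above):

import re

# Hand-maintained list of the colon-terminated field names, longest first so
# that e.g. "Buyers Agent" is tried before "Buyer".  Illustrative, not complete.
FIELD_NAMES = sorted([
    'File number', 'Status', 'Primary closing party', 'Acceptance',
    'Closing date', 'Property type', 'MLS number', 'Sale price', 'Commission',
    'Notes', 'Account owner', 'Seller', 'Buyer', 'Lessee Agent',
    'Leasing Agent', 'Listing Agent', 'Buyers Agent', 'Escrow Co',
    'Company name', 'Address', 'Home', 'Pager', 'Business', 'Fax',
    'Mobile', 'Email',
], key=len, reverse=True)

FIELD_RE = re.compile('(' + '|'.join(re.escape(n) for n in FIELD_NAMES) + '):')

# Fields that open a contact block; the contact fields that follow get prefixed.
ROLES = ('Seller', 'Buyer', 'Lessee Agent', 'Leasing Agent',
         'Listing Agent', 'Buyers Agent', 'Escrow Co')
CONTACT_FIELDS = ('Company name', 'Address', 'Home', 'Pager', 'Business',
                  'Fax', 'Mobile', 'Email')

def split_fields(line):
    # re.split with a capturing group keeps the field names in the result:
    # ['', 'File number', '  Jackie Grant  ', 'Status', ' Fell Thru ...', ...]
    parts = FIELD_RE.split(line)
    return [(name, value.strip()) for name, value in zip(parts[1::2], parts[2::2])]

def parse_record(lines):
    # lines: the stripped, non-boilerplate lines of one record
    record = {}
    role = ''
    for line in lines:
        for name, value in split_fields(line):
            if name in ROLES:
                role = name
            key = role + ' ' + name if role and name in CONTACT_FIELDS else name
            record[key] = value
    return record

parse_record would be fed the lines that the double-blank-line loop above collects for one record, after step 1 has dropped the boilerplate lines.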

Answer 1 (score: 0)

I think it would be easier to start a new record whenever you hit "Page".

Just sharing my own experience: writing a completely generic parser is too hard.

Given the data here it is not that bad, though. Instead of storing the entries in a plain list, use an object, and add every field to it as an attribute/value.
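
A minimal sketch of that idea (Entry and the attribute names are illustrative, and this does not yet split lines that carry several fields):

class Entry(object):
    """One record; each parsed field becomes an attribute."""
    pass

entries = []
current = None

for line in open('file-path.txt'):
    line = line.strip()
    if line.startswith('Page '):        # a "Page n of n" line starts a new record
        current = Entry()
        entries.append(current)
        continue
    if current is None or ':' not in line:
        continue
    # Naive: split at the first colon only; lines holding several fields
    # still need the extra splitting discussed in the other answer.
    name, _, value = line.partition(':')
    setattr(current, name.strip().lower().replace(' ', '_'), value.strip())

print(len(entries))        # 2 for the sample above
print(vars(entries[0]))    # the attributes collected for the first record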