How do I parse a single-column text file into a table using Python?

Asked: 2016-08-11 02:58:00

Tags: python web-scraping

I'm new to StackOverflow, but I've found a lot of answers on this site. I'm also new to programming, so I thought I'd join up and eventually become part of this community - starting with a question about a problem that has been bugging me for hours.

I log into a website and scrape a big body of text within b tags to convert into a proper table. The layout of the resulting Output.txt looks like this:

BIN                   STATUS                                                   
8FHA9D8H 82HG9F     RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS          


INVENTORY CODE:   FPBC   *SOUP CANS LENTILS                                 

BIN                   STATUS                                                   
HA8DHW2H HD0138     RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS          
8SHDNADU 00A123     #2956- INVALID STOCK COUPON CODE (MISSING).          
93827548 096DBR     RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS          

There are a bunch of pages with exactly the same blocks, but I need them all combined into an ACTUAL table that looks like this:

BIN                INV CODE                      STATUS
HA8DHW2HHD0138     FPBC-*SOUP CANS LENTILS       RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8SHDNADU00A123     FPBC-*SOUP CANS LENTILS       #2956- INVALID STOCK COUPON CODE (MISSING).
93827548096DBR     FPBC-*SOUP CANS LENTILS       RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8FHA9D8H82HG9F     SSXR-98-20LM NM CORN CREAM    RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS

Essentially, all the separate text blocks in this example would become part of this table, with the inventory code repeated against each of its bin values. I've taken a stab at parsing this data (tried Pandas / bs / openpyxl / the csv writer), but I'm not proud to admit those attempts went nowhere, since I couldn't find anything on this specific problem. Is there any kind soul who could help me? :)

(Also, I'm using Python 2.7)

2 answers:

Answer 0 (score: 0)

A simple custom parser like the one below should do the trick.

from __future__ import print_function



def parse_body(s):
    line_sep = '\n'
    getting_bins = False
    inv_code = ''
    for l in s.split(line_sep):
        l = l.strip()  # the scraped lines carry trailing whitespace
        if l.startswith('INVENTORY CODE:') and not getting_bins:
            # e.g. "INVENTORY CODE:   FPBC   *SOUP CANS LENTILS"
            # becomes "FPBC-*SOUP CANS LENTILS"
            inv_data = l.split()
            inv_code = inv_data[2] + '-' + ' '.join(inv_data[3:])
        elif l.startswith('INVENTORY CODE:') and getting_bins:
            print("unexpected inventory code while reading bins:", l)
        elif l.startswith('BIN') and l.endswith('STATUS'):
            # column header line; the bin rows follow
            getting_bins = True
        elif getting_bins and l:
            bin_data = l.split()
            # need to add exception handling here to make sure:
            # 1) we have an inv_code
            # 2) bin_data is at least 3 items big (two for the
            #    bin_id and at least one for the status message)
            # 3) maybe some constraint checking to ensure that we have
            #    a valid instance of an inventory code and bin id
            bin_id = ''.join(bin_data[0:2])
            message = ' '.join(bin_data[2:])
            # we now have a bin, an inv_code, and a message to add to our table
            print(bin_id.ljust(20), inv_code.ljust(30), message, sep='\t')
        elif getting_bins and not l:
            # blank line: done getting bins for the current inventory code
            getting_bins = False
            inv_code = ''
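
To run it end to end, read the scraped text back in and hand it to the parser - a minimal sketch, assuming the scraped output was saved as Output.txt (adjust the filename to wherever yours lives):

if __name__ == '__main__':
    # Output.txt is assumed to hold the scraped text shown in the question
    with open('Output.txt') as f:
        parse_body(f.read())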

Answer 1 (score: -2)

I've written some web-scraping code that might help you. Basically what you need to do is point at the web page, go into the HTML, find the tags for the table you're looking for, and extract the information with a module (I use Beautiful Soup). I'm building a JSON document because I need to store it in MongoDB; you can build a table instead.

#! /usr/bin/python

import sys
import requests
import re
from BeautifulSoup import BeautifulSoup
import pymongo

def req_and_parsing():
        url2 = 'http://businfo.dimts.in/businfo/Bus_info/EtaByRoute.aspx?ID='

        list1 = ['534UP','534DOWN']
        # fetch and parse each route page, collecting one dict per route
        outdict = [parsing_file(requests.get(url2 + Route).text, Route) for Route in list1]
        print outdict
        conn = f_connection()
        for route_dict in outdict:
                insert_records(conn, route_dict)


def parsing_file(txt,Route):
        soup = BeautifulSoup(txt)
        table = soup.findAll("table",{"id" : "ctl00_ContentPlaceHolder1_GridView2"})

        trtddict = {}

        # if the page shows an error label instead of the table, report it and bail out
        divtags = soup.findAll("span",{"id":"ctl00_ContentPlaceHolder1_ErrorLabel"})
        for divtag in divtags:
                print "div tag - " , divtag.text
                if divtag.text in ("Currently no bus is running on this route",
                                   "This is not a cluster (orange bus) route"):
                        print "Page not displayed, errored with below message for Route-", Route, " , ", divtag.text
                        sys.exit()

        # keep only rows with exactly two cells, mapping the first cell to the second
        trtags = table[0].findAll('tr')
        for trtag in trtags:
                tdtags = trtag.findAll('td')
                if len(tdtags) == 2:
                        trtddict[tdtags[0].text] = sub_colon(tdtags[1].text)
        return trtddict


def sub_colon(tag_str):
        return re.sub(';',',',tag_str)

def f_connection():
        try:
                conn = pymongo.MongoClient()
                print "Connected successfully!!!"
        except pymongo.errors.ConnectionFailure, e:
                print "Could not connect to MongoDB: %s" % e
                raise  # re-raise so callers never see an undefined conn
        return conn

def insert_records(conn,stop_dict):
        db = conn.test
        print db.collection_names()
        mycoll = db.stopsETA
        mycoll.insert(stop_dict)

if __name__ == "__main__":
        req_and_parsing()
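
If you want a flat table rather than MongoDB documents, the per-route dicts returned by parsing_file can be written out with the stdlib csv module. A minimal sketch (Python 2; the stops.csv filename and the STOP/ETA column names are my assumptions, not anything the site defines):

import csv

def write_table(outdict, path='stops.csv'):
        # outdict is the list of dicts built in req_and_parsing();
        # each one is assumed to map a table cell key to its value
        with open(path, 'wb') as f:
                writer = csv.writer(f)
                writer.writerow(['STOP', 'ETA'])
                for route_dict in outdict:
                        for key, value in route_dict.items():
                                # encode since BeautifulSoup yields unicode strings
                                writer.writerow([key.encode('utf-8'), value.encode('utf-8')])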