Loop to extract URLs from several text files

Asked: 2019-12-17 12:40:15

Tags: python for-loop extract

I am trying to extract a list of URLs from several files using a for loop, but I end up with the URL list from the first file only, repeated 10 times. I'm not sure what I am doing wrong. Also, I am an absolute beginner, so I assume there are much better ways to achieve what I want, but this is what I have so far.

type_urls = []
y = 0

for files in cwk_dir:
    while y < 10:
        open('./cwkfiles/cwkfile{}.crawler.idx'.format(y))
        lines = r.text.splitlines()
        header_loc = 7
        name_loc = lines[header_loc].find('Company Name')
        type_loc = lines[header_loc].find('Form Type')
        cik_loc = lines[header_loc].find('CIK')
        filedate_loc = lines[header_loc].find('Date Filed')
        url_loc = lines[header_loc].find('URL')
        firstdata_loc = 9
        for line in lines[firstdata_loc:]:
            company_name = line[:type_loc].strip()
            form_type = line[type_loc:cik_loc].strip()
            cik = line[cik_loc:filedate_loc].strip()
            file_date = line[filedate_loc:url_loc].strip()
            page_url = line[url_loc:].strip()
            typeandurl = (form_type, page_url)
            type_urls.append(typeandurl)
        y = y + 1

2 Answers:

Answer 0 (score: 1)

Here is a more Pythonic way to do it with pathlib and Python 3.

from pathlib import Path

cwk_dir = Path('./cwkfiles')

type_urls = []
header_loc = 7
firstdata_loc = 9

for cwkfile in cwk_dir.glob('cwkfile*.crawler.idx'):
    with cwkfile.open() as f:
        lines = f.readlines()
        # Column offsets are taken from the header row of each file
        name_loc = lines[header_loc].find('Company Name')
        type_loc = lines[header_loc].find('Form Type')
        cik_loc = lines[header_loc].find('CIK')
        filedate_loc = lines[header_loc].find('Date Filed')
        url_loc = lines[header_loc].find('URL')
        # Slice each data row at those offsets
        for line in lines[firstdata_loc:]:
            company_name = line[:type_loc].strip()
            form_type = line[type_loc:cik_loc].strip()
            cik = line[cik_loc:filedate_loc].strip()
            file_date = line[filedate_loc:url_loc].strip()
            page_url = line[url_loc:].strip()
            type_urls.append((form_type, page_url))

If you want to test on a smaller batch of files first, replace cwk_dir.glob('cwkfile*.crawler.idx') with cwk_dir.glob('cwkfile[0-9].crawler.idx'). Assuming the files are numbered sequentially starting at 0, that will give you just the first ten files.
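For example, a small sketch of that change (assuming the same ./cwkfiles directory and file naming as above):

from pathlib import Path

cwk_dir = Path('./cwkfiles')

# [0-9] matches a single digit, so only cwkfile0 .. cwkfile9 are picked up
for cwkfile in cwk_dir.glob('cwkfile[0-9].crawler.idx'):
    print(cwkfile.name)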

And here is a better way to put it all together in a more readable fashion:

from pathlib import Path


def get_offsets(header):
    return dict(
        company_name = header.find('Company Name'),
        form_type = header.find('Form Type'),
        cik = header.find('CIK'),
        file_date = header.find('Date Filed'),
        page_url = header.find('URL')
    )


def get_data(line, offsets):
    return dict(
        company_name = line[:offsets['form_type']].strip(),
        form_type = line[offsets['form_type']:offsets['cik']].strip(),
        cik = line[offsets['cik']:offsets['file_date']].strip(),
        file_date = line[offsets['file_date']:offsets['page_url']].strip(),
        page_url = line[offsets['page_url']:].strip()
    )


cwk_dir = Path('./cwkfiles')
types_and_urls = []
header_line = 7
first_data_line = 9

for cwkfile in cwk_dir.glob('cwkfile*.crawler.idx'):
    with cwkfile.open() as f:
        lines = f.readlines()
        offsets = get_offsets(lines[header_line])
        for line in lines[first_data_line:]:
            data = get_data(line, offsets)
            types_and_urls.append((data['form_type'], data['page_url']))
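As a quick, hypothetical sanity check of the result (assuming the files were parsed as expected), you can print the first few collected pairs:

for form_type, page_url in types_and_urls[:5]:
    print(form_type, page_url)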

Answer 1 (score: 0)

By the time you get to the second file, the while condition fails because y is already 10. Try setting y back to 0 just before the while loop:

for files in cwk_dir:
    y = 0
    while y < 10:
        ...

And since you open a file on the first line inside the while loop, you probably also want to close it before leaving the loop.
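For reference, a minimal sketch of the original loop with both fixes applied: the file is read and closed via a with block, and range() replaces the manual counter so there is nothing to reset. The file names and the count of ten files are assumptions carried over from the question.

type_urls = []

for y in range(10):
    # Assumed naming: cwkfile0.crawler.idx .. cwkfile9.crawler.idx
    with open('./cwkfiles/cwkfile{}.crawler.idx'.format(y)) as f:
        lines = f.read().splitlines()
    header_loc = 7
    # Column offsets come from the header row, as in the question
    type_loc = lines[header_loc].find('Form Type')
    cik_loc = lines[header_loc].find('CIK')
    url_loc = lines[header_loc].find('URL')
    for line in lines[9:]:
        form_type = line[type_loc:cik_loc].strip()
        page_url = line[url_loc:].strip()
        type_urls.append((form_type, page_url))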