Scrapy: go to the next page and download all files

Date: 2018-11-15 13:28:25

Tags: python web-scraping scrapy web-crawler scrapy-spider

I am new to Scrapy and Python. I can already get the details from the URL; now I want to follow the links and download all the files (.htm and .txt).

My code:

import scrapy

class legco(scrapy.Spider):
    name = "sec_gov"

    start_urls = ["https://www.sec.gov/cgi-bin/browse-edgar?company=&match=&CIK=&filenum=&State=&Country=&SIC=2834&owner=exclude&Find=Find+Companies&action=getcompany"]

    def parse(self, response):
        # Follow each company link in the results table
        for link in response.xpath('//table[@summary="Results"]//td[@scope="row"]/a/@href').extract():
            absoluteLink = response.urljoin(link)
            yield scrapy.Request(url=absoluteLink, callback=self.parse_page)

    def parse_page(self, response):
        # Collect the "Documents" button link for each filing
        for links in response.xpath('//table[@summary="Results"]//a[@id="documentsbutton"]/@href').extract():
            targetLink = response.urljoin(links)
            yield {"links": targetLink}

I need to follow each link and download all files ending in .htm or .txt. The code below does not work:

if link.endswith('.htm'):
    link = urlparse.urljoin(base_url, link)
    req = Request(link, callback=self.save_pdf)
    yield req                                                       

def save_pdf(self, response):
    path = response.url.split('/')[-1]
    with open(path, 'wb') as f:
        f.write(response.body)
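For reference, the names used in that fragment are never defined as posted; on Python 3 they would come from the imports sketched below (assuming Python 3 is the intended environment). Note also that Scrapy's own response.urljoin(link) makes the manual base_url handling unnecessary:

from urllib.parse import urljoin  # Python 3 home of the old urlparse.urljoin
from scrapy import Request        # Request is used in the fragment but never imported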

Can anyone help me? Thanks in advance.

1 Answer:

Answer 0 (score: 0):

Try the following to download the files to your desktop, or to any other location specified in the script:

import scrapy, os

class legco(scrapy.Spider):
    name = "sec_gov"

    start_urls = ["https://www.sec.gov/cgi-bin/browse-edgar?company=&match=&CIK=&filenum=&State=&Country=&SIC=2834&owner=exclude&Find=Find+Companies&action=getcompany"]

    def parse(self, response):
        # Follow each company link in the results table
        for link in response.xpath('//table[@summary="Results"]//td[@scope="row"]/a/@href').extract():
            absoluteLink = response.urljoin(link)
            yield scrapy.Request(url=absoluteLink, callback=self.parse_links)

    def parse_links(self, response):
        # Follow the "Documents" button on each filing index page
        for links in response.xpath('//table[@summary="Results"]//a[@id="documentsbutton"]/@href').extract():
            targetLink = response.urljoin(links)
            yield scrapy.Request(url=targetLink, callback=self.collecting_file_links)

    def collecting_file_links(self, response):
        # Keep only the .htm and .txt documents listed on the page
        for links in response.xpath('//table[contains(@summary,"Document")]//td[@scope="row"]/a/@href').extract():
            if links.endswith(".htm") or links.endswith(".txt"):
                baseLink = response.urljoin(links)
                yield scrapy.Request(url=baseLink, callback=self.download_files)

    def download_files(self, response):
        # Use the last URL segment as the filename
        path = response.url.split('/')[-1]
        dirf = r"C:\Users\WCS\Desktop\Storage"
        if not os.path.exists(dirf):
            os.makedirs(dirf)
        os.chdir(dirf)
        with open(path, 'wb') as f:
            f.write(response.body)
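To try the spider without setting up a full Scrapy project, it can be launched from a plain script; a minimal sketch, assuming the class above is defined in the same file:

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess()
process.crawl(legco)   # schedule the spider class defined above
process.start()        # start the reactor and block until the crawl finishes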

To be clear: you need to explicitly set dirf = r"C:\Users\WCS\Desktop\Storage", where C:\Users\WCS\Desktop (or anywhere else) is whatever location you want. The script will then create the Storage folder automatically and save the files inside it.
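As a side note, the os.chdir call on every response can be avoided by joining the directory and filename directly; a sketch of the same download_files callback with that change (behavior otherwise unchanged):

    def download_files(self, response):
        dirf = r"C:\Users\WCS\Desktop\Storage"
        os.makedirs(dirf, exist_ok=True)  # create the folder if it does not exist
        # Build the full path instead of changing the working directory
        path = os.path.join(dirf, response.url.split('/')[-1])
        with open(path, 'wb') as f:
            f.write(response.body)  # write the raw response bytes to disk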