Scrapy Files Pipeline does not download files

Date: 2019-07-16 19:19:08

Tags: python web-scraping scrapy

My task is to build a web crawler that downloads all of the .pdf files on a given site. The spider runs both on my local machine and on Scrapinghub. For some reason, when I run it, it only downloads some of the PDF files, not all of them. This can be seen by looking at the items in the output JSON.

I have set MEDIA_ALLOW_REDIRECTS = True and have tried running it on Scrapinghub as well as locally.

Here is my spider:

import scrapy
from scrapy.loader import ItemLoader
from poc_scrapy.items import file_list_Item
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class PdfCrawler(CrawlSpider):
    # PDF URLs already sent to the pipeline, shared across callbacks
    downloaded_set = set()
    name = 'example'
    allowed_domains = ['www.groton.org']
    start_urls = ['https://www.groton.org']

    rules=(
        Rule(LinkExtractor(allow='www.groton.org'), callback='parse_page', follow=True),
    )



    def parse_page(self, response):
        print('parsing', response)
        pdf_urls = []
        link_urls = []
        other_urls = []
        # print("this is the response", response.text)
        all_href = response.xpath('/html/body//a/@href').extract()

        # classify all links
        for href in all_href:
            if len(href) < 1:
                continue
            if href[-4:] == '.pdf':
                pdf_urls.append(href)
            elif href[0] == '/':
                link_urls.append(href)
            else:
                other_urls.append(href)

        # get the links that have PDFs and send them to the item pipeline
        for pdf in pdf_urls:
            if pdf[0:5] != 'http':
                # relative link: resolve it against the current page URL
                new_pdf = response.urljoin(pdf)
            else:
                # absolute link: use it as-is
                new_pdf = pdf

            if new_pdf in self.downloaded_set:
                # we have seen this PDF before, don't yield it again
                continue

            self.downloaded_set.add(new_pdf)
            loader = ItemLoader(item=file_list_Item())
            loader.add_value('file_urls', new_pdf)
            loader.add_value('base_url', response.url)
            yield loader.load_item()
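
The items module is not shown above; based on the fields used by the spider and the JSON output further down, poc_scrapy/items.py is presumably something like the sketch below (FilesPipeline requires the file_urls and files fields, and base_url is the extra field the spider fills in):

import scrapy

class file_list_Item(scrapy.Item):
    file_urls = scrapy.Field()  # URLs handed to FilesPipeline for download
    files = scrapy.Field()      # filled in by FilesPipeline with the download results
    base_url = scrapy.Field()   # page on which the PDF link was found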

settings.py

MEDIA_ALLOW_REDIRECTS = True
BOT_NAME = 'poc_scrapy'

SPIDER_MODULES = ['poc_scrapy.spiders']
NEWSPIDER_MODULE = 'poc_scrapy.spiders'

ROBOTSTXT_OBEY = True


DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'poc_scrapy.middlewares.UserAgentMiddlewareRotator': 400,
}


ITEM_PIPELINES = {
    'scrapy.pipelines.files.FilesPipeline': 1,
}
FILES_STORE = 'pdfs/'

AUTOTHROTTLE_ENABLED = True

Here is a small part of the output:

    {
        "file_urls": [
            "https://www.groton.org/ftpimages/542/download/download_3402393.pdf"
        ],
        "base_url": [
            "https://www.groton.org/parents/business-office"
        ],
        "files": []
    },

You can see that the PDF is listed in file_urls but was not downloaded (files is empty). There were 5 warning messages saying that some files could not be downloaded, but more than 20 files are missing.
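
For comparison, when FilesPipeline does download a file successfully, it adds an entry per file to the files field, roughly like this (the path and checksum below are just illustrative values, not real output):

    {
        "file_urls": [
            "https://www.groton.org/ftpimages/542/download/download_3402393.pdf"
        ],
        "base_url": [
            "https://www.groton.org/parents/business-office"
        ],
        "files": [
            {
                "url": "https://www.groton.org/ftpimages/542/download/download_3402393.pdf",
                "path": "full/0a79c461a4062ac383dc4fade7bc09f1384a3910.pdf",
                "checksum": "3adcce0e9dcf7accb8e3cb7dede7dad8"
            }
        ]
    },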

Here are the warning messages I received for some of the files:

[scrapy.pipelines.files] File (code: 301): Error downloading file from <GET http://groton.myschoolapp.com/ftpimages/542/download/Candidate_Statement_2013.pdf> referred in <None>

[scrapy.core.downloader.handlers.http11] Received more bytes than download warn size (33554432) in request <GET https://groton.myschoolapp.com/ftpimages/542/download/download_1474034.pdf>
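
The second message is only a warning: Scrapy emits it when a response grows past DOWNLOAD_WARNSIZE, which defaults to 33554432 bytes (32 MB); downloads are only aborted once they exceed DOWNLOAD_MAXSIZE (1 GB by default). If some of the missing PDFs turn out to be very large, both limits can be raised in settings.py; the numbers below are only example values:

DOWNLOAD_WARNSIZE = 134217728    # warn above 128 MB instead of the default 32 MB
DOWNLOAD_MAXSIZE = 2147483648    # allow responses up to 2 GB (default is 1 GB)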

I expect all of the files to be downloaded, or at least a warning message for every file that was not downloaded. Perhaps there is a workaround.

Any feedback would be greatly appreciated. Thanks!

1 answer:

Answer 0 (score: 0)

UPDATE: I realized that the problem was that robots.txt did not allow me to access some of the PDFs. This can be solved by downloading them with a different service, or by not obeying robots.txt.
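
If ignoring robots.txt is acceptable for this crawl (check the site's terms first), the change is a single setting in settings.py; the robots.txt middleware also applies to the requests made by FilesPipeline, which would explain why the items were yielded but the PDF downloads were blocked:

# settings.py -- only do this if you are allowed to ignore the site's robots.txt
ROBOTSTXT_OBEY = False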