Downloading files using ItemLoaders() in Scrapy

Posted: 2018-12-08 15:00:15

Tags: python-3.x scrapy scrapy-spider

I created a crawl spider to download files. However, the spider only downloads the URLs of the files, not the files themselves. I posted a question about this here: Scrapy crawl spider does not download files? While the basic yield spider suggested in the answer works perfectly, the spider does not work when I try to download the files using items and item loaders! The original question did not include items.py, so here it is:

ITEMS

import scrapy
from scrapy.item import Item, Field


class DepositsusaItem(Item):
    # main fields
    name = Field()
    file_urls = Field()
    files = Field()
    # Housekeeping Fields
    url = Field()
    project = Field()
    spider = Field()
    server = Field()
    date = Field()
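
For context (an illustrative note, not part of the original post): file_urls and files are the two field names Scrapy's built-in FilesPipeline relies on. The spider fills file_urls, and the pipeline writes the download results into files. A populated item might look roughly like this (all values are made up):

# Illustrative only - the values below are hypothetical, just to show the data shape.
item = DepositsusaItem(
    name=['Example dataset'],
    file_urls=['https://example.org/catalog/file/get/1234'],
)
# After Scrapy's built-in FilesPipeline runs, it adds results of the form:
# item['files'] = [{'url': '...', 'path': 'full/<sha1>.zip', 'checksum': '...'}]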

EDIT: added the original code. EDIT: more corrections.

SPIDER

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
import datetime
import socket
from us_deposits.items import DepositsusaItem
from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose
from urllib.parse import urljoin


class DepositsSpider(CrawlSpider):
    name = 'deposits'
    allowed_domains = ['doi.org']
    start_urls = ['https://minerals.usgs.gov/science/mineral-deposit-database/#products', ]

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//*[@id="products"][1]/p/a'),
             callback='parse_x'),
    )

    def parse_x(self, response):
        i = ItemLoader(item=DepositsusaItem(), response=response)
        i.add_xpath('name', '//*[@class="container"][1]/header/h1/text()')
        i.add_xpath('file_urls', '//span[starts-with(@data-url, "/catalog/file/get/")]/@data-url',
                    MapCompose(lambda i: urljoin(response.url, i))
                    )
        i.add_value('url', response.url)
        i.add_value('project', self.settings.get('BOT_NAME'))
        i.add_value('spider', self.name)
        i.add_value('server', socket.gethostname())
        i.add_value('date', datetime.datetime.now())
        return i.load_item()
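
As a side note (an illustration, not from the original post), MapCompose applies the given callable to every extracted value, which is how the relative data-url paths above become absolute file URLs:

from urllib.parse import urljoin
from scrapy.loader.processors import MapCompose

# The '/catalog/file/get/...' path below is a made-up example value.
base_url = 'https://minerals.usgs.gov/science/mineral-deposit-database/'
make_absolute = MapCompose(lambda u: urljoin(base_url, u))

print(make_absolute(['/catalog/file/get/abc123']))
# ['https://minerals.usgs.gov/catalog/file/get/abc123']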

SETTINGS

BOT_NAME = 'us_deposits'
SPIDER_MODULES = ['us_deposits.spiders']
NEWSPIDER_MODULE = 'us_deposits.spiders'
ROBOTSTXT_OBEY = False
ITEM_PIPELINES = {
    'us_deposits.pipelines.UsDepositsPipeline': 1,
    'us_deposits.pipelines.FilesPipeline': 2
}

FILES_STORE = 'C:/Users/User/Documents/Python WebCrawling Learning Projects'

PIPELINES

class UsDepositsPipeline(object):
    def process_item(self, item, spider):
        return item


class FilesPipeline(object):
    def process_item(self, item, spider):
        return item
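
Worth noting (my observation, not part of the original post): the FilesPipeline defined above is just a pass-through class that happens to share a name with Scrapy's built-in pipeline; the class that actually downloads files is scrapy.pipelines.files.FilesPipeline. If custom behaviour were needed, it would normally be added by subclassing the built-in class, roughly like this:

from scrapy.pipelines.files import FilesPipeline  # Scrapy's built-in download pipeline

class UsDepositsFilesPipeline(FilesPipeline):
    # Hypothetical subclass: inherits the actual download behaviour unchanged.
    pass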

1 Answer:

Answer 0 (score: 1)

It seems to me that using items and/or item loaders has nothing to do with your problem.

The only problems I can see are in your settings file:

  • FilesPipeline is not activated (only us_deposits.pipelines.UsDepositsPipeline is activated)
  • FILES_STORE should be a string, not a set (an exception is raised when the files pipeline is activated)
  • ROBOTSTXT_OBEY = True will prevent the downloading of files

If I correct all of those issues, the file downloads work as expected.
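
For reference, a corrected settings sketch based on those points might look like this (values copied from the question; the built-in pipeline path is scrapy.pipelines.files.FilesPipeline):

BOT_NAME = 'us_deposits'
SPIDER_MODULES = ['us_deposits.spiders']
NEWSPIDER_MODULE = 'us_deposits.spiders'

# Must stay False so robots.txt does not block the file downloads.
ROBOTSTXT_OBEY = False

ITEM_PIPELINES = {
    'us_deposits.pipelines.UsDepositsPipeline': 1,
    # Activate Scrapy's built-in files pipeline so the URLs in file_urls are actually fetched.
    'scrapy.pipelines.files.FilesPipeline': 2,
}

# A plain string path, not a set.
FILES_STORE = 'C:/Users/User/Documents/Python WebCrawling Learning Projects'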