Using CrawlerProcess.crawl() in Scrapy

Date: 2017-02-28 14:48:21

Tags: python-3.x web-scraping scrapy scrapy-spider scrapinghub

I am trying to invoke a spider programmatically from a script. I am unable to override the settings through the constructor using CrawlerProcess. Let me illustrate this with the default spider for scraping quotes from the official scrapy site (the last code snippet of the official scrapy quotes example spider).

from scrapy import Spider, Request


class QuotesSpider(Spider):

    name = "quotes"

    def __init__(self, somestring, *args, **kwargs):
        super(QuotesSpider, self).__init__(*args, **kwargs)
        self.somestring = somestring
        self.custom_settings = kwargs


    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield Request(url=url, callback=self.parse)

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }

Here is the script through which I try to run the quotes spider:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from scrapy.settings import Settings

def main():
    proc = CrawlerProcess(get_project_settings())

    custom_settings_spider = {
        'FEED_URI': 'quotes.csv',
        'LOG_FILE': 'quotes.log'
    }
    proc.crawl('quotes', 'dummyinput', **custom_settings_spider)
    proc.start()


if __name__ == '__main__':
    main()

4 Answers:

Answer 0 (score: 7)

Scrapy settings are somewhat like Python dicts, so you can update the settings object before passing it to CrawlerProcess:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from scrapy.settings import Settings

def main():
    s = get_project_settings()
    s.update({
        'FEED_URI': 'quotes.csv',
        'LOG_FILE': 'quotes.log'
    })
    proc = CrawlerProcess(s)

    proc.crawl('quotes', 'dummyinput')
    proc.start()

Edit, following the OP's comment:

Here is a variant using CrawlerRunner, creating a new CrawlerRunner for each crawl and re-configuring logging on each iteration so that each run writes to a different file:

import logging
from twisted.internet import reactor, defer

import scrapy
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging, _get_handler
from scrapy.utils.project import get_project_settings


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        page = getattr(self, 'page', 1)
        yield scrapy.Request('http://quotes.toscrape.com/page/{}/'.format(page),
                             self.parse)

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }


@defer.inlineCallbacks
def crawl():
    s = get_project_settings()
    for i in range(1, 4):
        s.update({
            'FEED_URI': 'quotes%03d.csv' % i,
            'LOG_FILE': 'quotes%03d.log' % i
        })

        # manually configure logging for LOG_FILE
        configure_logging(settings=s, install_root_handler=False)
        logging.root.setLevel(logging.NOTSET)
        handler = _get_handler(s)
        logging.root.addHandler(handler)

        runner = CrawlerRunner(s)
        yield runner.crawl(QuotesSpider, page=i)

        # reset root handler
        logging.root.removeHandler(handler)
    reactor.stop()

crawl()
reactor.run() # the script will block here until the last crawl call is finished

Answer 1 (score: 1)

I don't think you can override the custom_settings variable of a Spider class when calling it as a script, mainly because the settings are loaded before the spider is instantiated.
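
For reference, custom_settings is normally declared as a class attribute, which Scrapy reads when it builds the crawler, i.e. before __init__ ever runs; a minimal sketch (the setting values are just placeholders):

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    # Read at crawler-construction time, before the spider instance exists,
    # so assigning it inside __init__ comes too late.
    custom_settings = {
        'FEED_URI': 'quotes.csv',
        'LOG_FILE': 'quotes.log',
    }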

Now, I don't really see the point of changing custom_settings specifically, since it is just a way of overriding your default settings, and that is exactly what CrawlerProcess offers as well. This works as expected:

import scrapy
from scrapy.crawler import CrawlerProcess


class MySpider(scrapy.Spider):
    name = 'simple'
    start_urls = ['http://httpbin.org/headers']

    def parse(self, response):
        for k, v in self.settings.items():
            print('{}: {}'.format(k, v))
        yield {
            'headers': response.body
        }

process = CrawlerProcess({
    'USER_AGENT': 'my custom user agent',
    'ANYKEY': 'any value',
})

process.crawl(MySpider)
process.start()

Answer 2 (score: 0)

You can override settings from the command line:

https://doc.scrapy.org/en/latest/topics/settings.html#command-line-options

For example: scrapy crawl myspider -s LOG_FILE=scrapy.log
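
If the crawl still needs to be driven from a Python script, the same command-line style options can be passed programmatically through scrapy.cmdline.execute; a rough sketch, assuming it is run from inside the Scrapy project so the quotes spider can be found (the setting values are placeholders):

from scrapy.cmdline import execute

# Equivalent to running:
#   scrapy crawl quotes -s FEED_URI=quotes.csv -s LOG_FILE=quotes.log
# execute() parses the argument list just like the scrapy command line would.
execute([
    'scrapy', 'crawl', 'quotes',
    '-s', 'FEED_URI=quotes.csv',
    '-s', 'LOG_FILE=quotes.log',
])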

Answer 3 (score: -1)

It seems you want each spider to have its own custom log. You need to activate and configure the logging yourself for that.

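One possible sketch of such per-spider log configuration, assuming Scrapy's configure_logging with install_root_handler=False plus the standard logging module (the log file name and format are illustrative placeholders, not from the answer):

import logging

import scrapy
from scrapy.utils.log import configure_logging


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ['http://quotes.toscrape.com/page/1/']

    def __init__(self, *args, **kwargs):
        super(QuotesSpider, self).__init__(*args, **kwargs)
        # Keep Scrapy from installing its own root handler, then send this
        # spider's log records to a file of its own.
        configure_logging(install_root_handler=False)
        logging.basicConfig(
            filename='quotes_spider.log',
            format='%(levelname)s: %(message)s',
            level=logging.INFO,
        )

    def parse(self, response):
        self.logger.info('Parsed %s', response.url)
        for quote in response.css('div.quote'):
            yield {'text': quote.css('span.text::text').extract_first()}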