How do I save the data from a scrapy crawler into a variable?

Asked: 2016-11-21 08:04:09

Tags: python scrapy

I am currently building a web application that displays data collected by a Scrapy spider. The user makes a request, the spider crawls a website, and the data is returned to the application so it can be displayed. I would like to retrieve the data directly from the scraper, without relying on an intermediate .csv or .json file. Something like:

from scrapy.crawler import CrawlerProcess
from scraper.spiders import MySpider

url = 'www.example.com'
spider = MySpider()
crawler = CrawlerProcess()
crawler.crawl(spider, start_urls=[url])
crawler.start()
data = crawler.data # this bit

3 Answers:

Answer 0 (score: 7)

This is not easy, because Scrapy is non-blocking and works in an event loop; it uses the Twisted event loop, and the Twisted event loop is not restartable, so you cannot write crawler.start(); data = crawler.data: after crawler.start(), the process runs forever, calling the registered callbacks until it is killed or finishes.
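That said, if your application only ever needs a single crawl per process, a blocking one-shot collection is possible: connect a handler to the item_scraped signal and let CrawlerProcess.start() block until the crawl is done. A minimal sketch, reusing the hypothetical MySpider import from the question:

from scrapy import signals
from scrapy.crawler import CrawlerProcess
from scraper.spiders import MySpider  # placeholder import from the question

items = []

def collect_item(item):
    items.append(item)

process = CrawlerProcess()
crawler = process.create_crawler(MySpider)
crawler.signals.connect(collect_item, signals.item_scraped)
process.crawl(crawler, start_urls=['http://www.example.com'])
process.start()  # blocks here until the crawl finishes
print(items)

This only works once per process, though; calling process.start() a second time raises ReactorNotRestartable because the Twisted reactor cannot be restarted.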

These answers may be relevant:

If you use an event loop in your application (for example, if you have a Twisted or Tornado web server), then it is possible to get the data from a crawl without storing it to disk. The idea is to listen to the item_scraped signal. I am using the following helper to make that nicer:

import collections

from twisted.internet.defer import Deferred
from scrapy import signals

def scrape_items(crawler_runner, crawler_or_spidercls, *args, **kwargs):
    """
    Start a crawl and return an object (ItemCursor instance)
    which allows to retrieve scraped items and wait for items
    to become available.

    Example:

    .. code-block:: python

        @inlineCallbacks
        def f():
            runner = CrawlerRunner()
            async_items = scrape_items(runner, my_spider)
            while (yield async_items.fetch_next):
                item = async_items.next_item()
                # ...
            # ...

    This convoluted way to write a loop should become unnecessary
    in Python 3.5 because of ``async for``.
    """
    crawler = crawler_runner.create_crawler(crawler_or_spidercls)
    d = crawler_runner.crawl(crawler, *args, **kwargs)
    return ItemCursor(d, crawler)


class ItemCursor(object):
    def __init__(self, crawl_d, crawler):
        self.crawl_d = crawl_d
        self.crawler = crawler

        crawler.signals.connect(self._on_item_scraped, signals.item_scraped)

        crawl_d.addCallback(self._on_finished)
        crawl_d.addErrback(self._on_error)

        self.closed = False
        self._items_available = Deferred()
        self._items = collections.deque()

    def _on_item_scraped(self, item):
        self._items.append(item)
        self._items_available.callback(True)
        self._items_available = Deferred()

    def _on_finished(self, result):
        self.closed = True
        self._items_available.callback(False)

    def _on_error(self, failure):
        self.closed = True
        self._items_available.errback(failure)

    @property
    def fetch_next(self):
        """
        A Deferred used with ``inlineCallbacks`` or ``gen.coroutine`` to
        asynchronously retrieve the next item, waiting for an item to be
        crawled if necessary. Resolves to ``False`` if the crawl is finished,
        otherwise :meth:`next_item` is guaranteed to return an item
        (a dict or a scrapy.Item instance).
        """
        if self._items:
            # a result is ready
            d = Deferred()
            d.callback(True)
            return d

        if self.closed:
            # crawl is finished and all items have been consumed
            d = Deferred()
            d.callback(False)
            return d

        # We're active, but item is not ready yet. Return a Deferred which
        # resolves to True if item is scraped or to False if crawl is stopped.
        return self._items_available

    def next_item(self):
        """Get a document from the most recently fetched batch, or ``None``.
        See :attr:`fetch_next`.
        """
        if not self._items:
            return None
        return self._items.popleft()

The API is inspired by motor, a MongoDB driver for asynchronous frameworks. Using scrape_items you can get items from Twisted or Tornado callbacks as soon as they are scraped, in a way similar to how you fetch items from a MongoDB query.
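For example, here is a sketch of draining an entire crawl into a list from Twisted code; MySpider stands in for your own spider class:

from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks
from scrapy.crawler import CrawlerRunner

@inlineCallbacks
def collect_all(spidercls):
    runner = CrawlerRunner()
    cursor = scrape_items(runner, spidercls)
    items = []
    while (yield cursor.fetch_next):
        items.append(cursor.next_item())
    return items  # on Python 3 a plain return works under @inlineCallbacks

d = collect_all(MySpider)
d.addCallback(print)
d.addBoth(lambda _: reactor.stop())
reactor.run()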

Answer 1 (score: 1)

This may be too late, but it could help others: you can pass a callback function to the spider and call that function to return the data, like this:

The dummy spider we are going to use:

from scrapy import Spider


class Trial(Spider):
    name = 'trial'

    start_urls = ['']

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.output_callback = kwargs.get('args').get('callback')

    def parse(self, response):
        pass

    def close(self, spider, reason):
        self.output_callback(['Hi, This is the output.'])

The custom class with the callback:

from scrapy.crawler import CrawlerProcess
from scrapyapp.spiders.trial_spider import Trial


class CustomCrawler:

    def __init__(self):
        self.output = None
        self.process = CrawlerProcess(settings={'LOG_ENABLED': False})

    def yield_output(self, data):
        self.output = data

    def crawl(self, cls):
        self.process.crawl(cls, args={'callback': self.yield_output})
        self.process.start()


def crawl_static(cls):
    crawler = CustomCrawler()
    crawler.crawl(cls)
    return crawler.output

Then you can do:

out = crawl_static(Trial)
print(out)
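Note that crawl_static blocks until the crawl finishes, and since CrawlerProcess starts the Twisted reactor it can only run once per process. The dummy spider above only hands back a fixed string; a sketch of a more realistic variant (the item fields are made up for illustration) would accumulate scraped items and pass the whole list to the callback when the spider closes:

class CollectingTrial(Spider):
    name = 'collecting_trial'

    start_urls = ['http://www.example.com']

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.output_callback = kwargs.get('args').get('callback')
        self.items = []

    def parse(self, response):
        item = {'url': response.url, 'title': response.css('title::text').get()}
        self.items.append(item)
        yield item

    def close(self, spider, reason):
        # hand the collected items back to CustomCrawler.yield_output
        self.output_callback(self.items)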

Answer 2 (score: 0)

You can pass a variable into the spider as an attribute of the class and store the data in it.

Of course, you need to accept the attribute in the __init__ method of your spider class.

from scrapy.crawler import CrawlerProcess
from scraper.spiders import MySpider

url = 'http://www.example.com'
crawler = CrawlerProcess()
data = []
# pass the spider class, not an instance, and pass data as a keyword argument
crawler.crawl(MySpider, start_urls=[url], data=data)
crawler.start()
print(data)
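For that to work, the spider has to accept and keep the list. A minimal sketch of the spider side (MySpider and the item fields are placeholders):

import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    def __init__(self, data=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # keep a reference to the caller's list; appends are visible outside
        self.data = data if data is not None else []

    def parse(self, response):
        item = {'url': response.url}
        self.data.append(item)
        yield item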