ScrapyDeprecationWarning: Command's default `crawler` is deprecated and will be removed. Use the `create_crawler` method to instantiate crawlers

Date: 2013-10-07 17:24:40

Tags: scrapy

Scrapy version 0.19

I am using the code from this page (Run multiple scrapy spiders at once using scrapyd). When I run scrapy allcrawl, I get:

ScrapyDeprecationWarning: Command's default `crawler` is deprecated and will be removed. Use `create_crawler` method to instantiate crawlers

Here is the code:

from scrapy.command import ScrapyCommand
import urllib
import urllib2
from scrapy import log

class AllCrawlCommand(ScrapyCommand):

    requires_project = True
    default_settings = {'LOG_ENABLED': False}

    def short_desc(self):
        return "Schedule a run for all available spiders"

    def run(self, args, opts):
        url = 'http://localhost:6800/schedule.json'
        for s in self.crawler.spiders.list(): # this line raises the warning
            values = {'project' : 'YOUR_PROJECT_NAME', 'spider' : s}
            data = urllib.urlencode(values)
            req = urllib2.Request(url, data)
            response = urllib2.urlopen(req)
            log.msg(response)

How can I fix the DeprecationWarning?

Thanks

1 Answer:

Answer 0: (score: 1)

Use:

crawler = self.crawler_process.create_crawler()

and iterate over crawler.spiders.list() instead of self.crawler.spiders.list().
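The underlying migration pattern here (a deprecated default attribute replaced by an explicit factory method) can be illustrated with a small, self-contained Python sketch. The class and method names below are hypothetical stand-ins, not Scrapy's actual implementation; the sketch only shows why accessing the old attribute warns while calling the factory method does not:

```python
import warnings

class CrawlerProcess:
    """Hypothetical stand-in illustrating the deprecated-attribute
    -> factory-method migration (not Scrapy's real class)."""

    def create_crawler(self):
        # Preferred path: an explicit factory method, no warning.
        return {"spiders": ["spider_a", "spider_b"]}

    @property
    def crawler(self):
        # Deprecated path: accessing the default crawler emits a
        # DeprecationWarning, mirroring the message in the question.
        warnings.warn(
            "default `crawler` is deprecated; use `create_crawler`",
            DeprecationWarning,
        )
        return self.create_crawler()

process = CrawlerProcess()

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    old_way = process.crawler           # triggers the warning
    new_way = process.create_crawler()  # silent
```

After the `with` block, `caught` holds exactly one recorded warning (from the deprecated attribute access), and both paths return the same object, which is why the one-line change in the answer is enough to silence the warning without altering behavior.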