I am running Scrapy spiders inside Celery, and I randomly get this kind of error:
Unhandled Error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/twisted/internet/base.py", line 428, in fireEvent
    DeferredList(beforeResults).addCallback(self._continueFiring)
  File "/usr/lib/python2.7/site-packages/twisted/internet/defer.py", line 321, in addCallback
    callbackKeywords=kw)
  File "/usr/lib/python2.7/site-packages/twisted/internet/defer.py", line 310, in addCallbacks
    self._runCallbacks()
  File "/usr/lib/python2.7/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
--- <exception caught here> ---
  File "/usr/lib/python2.7/site-packages/twisted/internet/base.py", line 441, in _continueFiring
    callable(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/twisted/internet/base.py", line 667, in disconnectAll
    selectables = self.removeAll()
  File "/usr/lib/python2.7/site-packages/twisted/internet/epollreactor.py", line 191, in removeAll
    [self._selectables[fd] for fd in self._reads],
exceptions.KeyError: 94
The number varies from run to run (where it says 94, another run might show 97, and so on).
I am using:
celery==3.1.19
Django==1.9.4
Scrapy==1.3.0
This is how I run Scrapy inside Celery:
from billiard import Process
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings


class MyCrawlerScript(Process):
    def __init__(self, **kwargs):
        Process.__init__(self)
        # get_project_settings() takes no arguments; it loads the settings
        # module pointed to by SCRAPY_SETTINGS_MODULE / scrapy.cfg
        settings = get_project_settings()
        self.crawler = CrawlerProcess(settings)
        self.spider_name = kwargs.get('spider_name')
        self.kwargs = kwargs

    def run(self):
        # forward the keyword arguments to the spider
        self.crawler.crawl(self.spider_name, **self.kwargs)
        self.crawler.start()


def my_crawl_manager(**kwargs):
    # run each crawl in its own billiard process so the Twisted reactor
    # is started fresh and never restarted inside the Celery worker
    crawler = MyCrawlerScript(**kwargs)
    crawler.start()
    crawler.join()
Inside a Celery task, I call:
my_crawl_manager(spider_name='my_spider', url='www.google.com/any-url-here')
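For context, the Celery task itself is just a thin wrapper around that call. A minimal sketch, assuming a hypothetical task named run_spider and the standard shared_task decorator (neither is shown in the original post):

from celery import shared_task

# Hypothetical task; name and signature are assumptions for illustration
@shared_task
def run_spider(url):
    my_crawl_manager(spider_name='my_spider', url=url)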
Does anyone know why this is happening?
Answer 0 (score: 0)
I once had this problem.
Check whether there is an empty __init__.py file in your spiders folder. It should be there.
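If you want to verify this programmatically, here is a minimal sketch, assuming the project layout is my_scraper/spiders/ (the path is an assumption; adjust it to your project):

import os

# Hypothetical path to the spiders package; adjust as needed
init_path = os.path.join('my_scraper', 'spiders', '__init__.py')
if not os.path.exists(init_path):
    # create the empty __init__.py so Python treats spiders/ as a package
    open(init_path, 'a').close()

Without that file, Python 2.7 does not treat the spiders directory as a package, so the project's spiders may fail to import.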