I have a multi-level Scrapy spider that works locally, but every request returns a GeneratorExit when I run it on Scrapy Cloud.
Here are the parse methods:
def parse(self, response):
    results = list(response.css(".list-group li a::attr(href)"))
    for c in results:
        meta = {}
        for key in response.meta.keys():
            meta[key] = response.meta[key]
        yield response.follow(c,
                              callback=self.parse_category,
                              meta=meta,
                              errback=self.errback_httpbin)

def parse_category(self, response):
    category_results = list(response.css(
        ".item a.link-unstyled::attr(href)"))
    category = response.css(".active [itemprop='title']")
    for r in category_results:
        meta = {}
        for key in response.meta.keys():
            meta[key] = response.meta[key]
        meta["category"] = category
        yield response.follow(r, callback=self.parse_item,
                              meta=meta,
                              errback=self.errback_httpbin)

def errback_httpbin(self, failure):
    # log all failures
    self.logger.error(repr(failure))
Here is the traceback:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
yield next(it)
GeneratorExit
[stderr] Exception ignored in: <generator object iter_errback at 0x7fdea937a9e8>
File "/usr/local/lib/python3.6/site-packages/twisted/internet/base.py", line 1243, in run
self.mainLoop()
File "/usr/local/lib/python3.6/site-packages/twisted/internet/base.py", line 1252, in mainLoop
self.runUntilCurrent()
File "/usr/local/lib/python3.6/site-packages/twisted/internet/base.py", line 878, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/usr/local/lib/python3.6/site-packages/twisted/internet/task.py", line 671, in _tick
taskObj._oneWorkUnit()
--- <exception caught here> ---
File "/usr/local/lib/python3.6/site-packages/twisted/internet/task.py", line 517, in _oneWorkUnit
result = next(self._iterator)
File "/usr/local/lib/python3.6/site-packages/scrapy/utils/defer.py", line 63, in <genexpr>
work = (callable(elem, *args, **named) for elem in iterable)
File "/usr/local/lib/python3.6/site-packages/scrapy/core/scraper.py", line 183, in _process_spidermw_output
self.crawler.engine.crawl(request=output, spider=spider)
File "/usr/local/lib/python3.6/site-packages/scrapy/core/engine.py", line 210, in crawl
self.schedule(request, spider)
File "/usr/local/lib/python3.6/site-packages/scrapy/core/engine.py", line 216, in schedule
if not self.slot.scheduler.enqueue_request(request):
File "/usr/local/lib/python3.6/site-packages/scrapy/core/scheduler.py", line 57, in enqueue_request
dqok = self._dqpush(request)
File "/usr/local/lib/python3.6/site-packages/scrapy/core/scheduler.py", line 86, in _dqpush
self.dqs.push(reqd, -request.priority)
File "/usr/local/lib/python3.6/site-packages/queuelib/pqueue.py", line 35, in push
q.push(obj) # this may fail (eg. serialization error)
File "/usr/local/lib/python3.6/site-packages/scrapy/squeues.py", line 15, in push
s = serialize(obj)
File "/usr/local/lib/python3.6/site-packages/scrapy/squeues.py", line 27, in _pickle_serialize
return pickle.dumps(obj, protocol=2)
builtins.TypeError: can't pickle HtmlElement objects
I set up an errback, but it doesn't give me any error details. I also copy meta into every request, but that makes no difference. Am I missing something?
UPDATE: The error seems specific to multi-level spiders. For now, I have rewritten the spider with a single parse method.
Answer (score: 3)
One of the differences between running a job locally and running it on Scrapy Cloud is that Scrapy Cloud enables the JOBDIR setting, which makes Scrapy serialize requests into a disk queue instead of keeping them in an in-memory queue.
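You can reproduce the Cloud behavior locally by enabling the same setting yourself. A minimal sketch (the directory path is just an example):

# settings.py - turn on the disk-based request queue locally,
# mirroring what Scrapy Cloud does
JOBDIR = 'crawls/my-spider-run-1'

With JOBDIR enabled, the crawl above should fail locally with the same TypeError.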
When serializing to disk, the pickle operation fails because your request.meta dict contains a SelectorList object (assigned on the line category = response.css(".active [itemprop='title']")), and the selectors in it wrap instances of lxml.html.HtmlElement, which cannot be pickled (an issue outside Scrapy's scope). Hence the TypeError: can't pickle HtmlElement objects.
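The straightforward fix is to store only plain, pickleable data in meta. A minimal sketch of the offending line rewritten, assuming you want the title text (extract_first() returns a string, or None if nothing matches):

# store a plain string in meta instead of a SelectorList, so the
# request can be pickled into the disk queue
category = response.css(".active [itemprop='title']::text").extract_first()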
There is a merged pull request that addresses this. It does not make the pickle operation succeed; instead, it makes the scheduler detect that such a request cannot be serialized to disk and fall back to the in-memory queue.
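Until you run a Scrapy release that includes that fix, a quick way to catch the problem early is to test your meta dict with the same pickle call the disk queue uses (protocol=2, per the traceback). This is a debugging sketch, not part of Scrapy's API:

import pickle

# raises TypeError ("can't pickle HtmlElement objects") if any value
# in meta cannot be serialized for the disk queue
pickle.dumps(dict(response.meta), protocol=2)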