Scrapy contracts fail if the Item or ItemLoader is instantiated from the meta attribute of a Request() object passed along by an earlier parse method.
I am considering overriding ScrapesContract to pre-process the request and load some dummy values into request.meta, but I am not sure that is a good approach.
In the docs I see that the pre_process method (illustrated by HasHeaderContract at the bottom of the contracts page) reads attributes from the object it receives, but I am not sure whether it can be used to set them.
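For reference, here is roughly the HasHeaderContract example from the Scrapy docs (paraphrased from memory, so treat the exact wording as approximate); note that its pre_process only inspects the response it is handed:

from scrapy.contracts import Contract
from scrapy.exceptions import ContractFail

class HasHeaderContract(Contract):
    """Demo contract which checks the presence of a custom header
    @has_header X-CustomHeader
    """
    name = 'has_header'

    def pre_process(self, response):
        # Fail the check if any header listed after @has_header is missing.
        for header in self.args:
            if header not in response.headers:
                raise ContractFail("header '%s' not present" % header)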
EDIT: more detail. The example spider's methods:
def parse_level_one(self, response):
    # ... populate loader ...
    return Request(url=url, callback=self.parse_level_two,
                   meta={'loader': loader.load_item()})
def parse_level_two(self, response):
    """Parse product detail page
    @url http://example.com
    @scrapes some_field1 some_field2
    """
    loader = MyItemLoader(response.meta['loader'], response=response)
On the command line:

$ scrapy check crawlername
Traceback (most recent call last):
  ...
  loader = MyItemLoader(response.meta['loader'], response=response)
KeyError: 'loader'

(The check fails because scrapy check builds its test request directly from the @url annotation, so response.meta never contains a 'loader' key.)
The idea I am considering:
from scrapy.contracts import Contract
from scrapy.exceptions import ContractFail
from scrapy.item import BaseItem

class LoadedScrapesContract(Contract):
    """ Contract to check presence of fields in scraped items
    @loadedscrapes page_name page_body
    """
    name = 'loadedscrapes'

    def pre_process(self, response):
        # MEDDLE WITH THE RESPONSE OBJECT HERE
        # TO ADD A META ATTRIBUTE TO RESPONSE,
        # LIKE AN EMPTY Item() or dict, JUST TO MAKE
        # THE ITEM LOADER INSTANTIATION PASS
        pass

    # this is the same as ScrapesContract
    def post_process(self, output):
        for x in output:
            if isinstance(x, BaseItem):
                for arg in self.args:
                    if arg not in x:
                        raise ContractFail("'%s' field is missing" % arg)
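One way to fill in the pre_process gap above is a small sketch like the following, assuming that response.meta simply proxies response.request.meta (as it does in Scrapy) and that MyItem is the item class behind MyItemLoader:

    def pre_process(self, response):
        # response.meta returns response.request.meta, so mutating the dict
        # here makes the dummy value visible inside parse_level_two.
        response.meta.setdefault('loader', MyItem())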
Answer 0 (score: 0):
Rather than hacking the contracts, the best solution I have found is to do the following:
loader = MyItemLoader(response.meta.get('loader', MyItem()), response=response)
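In context, a sketch of the earlier parse_level_two with this fallback (again assuming MyItem is the item class used by MyItemLoader), which lets scrapy check exercise the callback even though no upstream request supplied the meta entry:

def parse_level_two(self, response):
    """Parse product detail page
    @url http://example.com
    @scrapes some_field1 some_field2
    """
    # Fall back to an empty MyItem() when the contract runner's request
    # carries no meta, so instantiating the loader no longer raises KeyError.
    loader = MyItemLoader(response.meta.get('loader', MyItem()), response=response)
    # ... add_css()/add_value() calls that populate some_field1 and some_field2 ...
    return loader.load_item()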
I prefer this approach, but to stick with the question as asked: the hook to override is adjust_request_args, which receives the dict of keyword arguments the contract runner will use to build its test Request and can return it modified.
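A minimal sketch of such a contract (the class name, the @dummy_loader contract name, and MyItem are illustrative assumptions; adjust_request_args itself is the standard Contract hook):

from scrapy.contracts import Contract
from myproject.items import MyItem  # assumption: the item class used by MyItemLoader

class DummyLoaderContract(Contract):
    """Inject a dummy item into request.meta so callbacks that read
    response.meta['loader'] can still be run by scrapy check.
    Used in a callback docstring as: @dummy_loader
    """
    name = 'dummy_loader'

    def adjust_request_args(self, args):
        # args are the kwargs the contract runner passes to Request();
        # add a meta entry so response.meta['loader'] exists in the callback.
        meta = args.setdefault('meta', {})
        meta.setdefault('loader', MyItem())
        return args

Add @dummy_loader to the docstring of parse_level_two alongside @url and @scrapes, and the KeyError goes away without touching the spider code itself.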