Here is what I am trying to achieve:
class Hello(Spider):
    # some stuff

    def parse(self, response):
        # get a list of city urls using pickle and store them in a list
        # Now, for each city url, I have to get the list of monuments (using
        # selenium), which is achieved by the loops below
        for c in cities:
            # get the list of monuments using selenium and iterate through each
            # monument url contained in the division
            divs = sel.xpath('some xpath/div')
            for div in divs:
                monument_url = ''.join(div.xpath('some xpath').extract())
                # For each monument url, get the response and scrape the information
                yield Request(monument_url, self.parse_monument)

    def parse_monument(self, response):
        # scrape some information and return to the loop (i.e. return to "for div in divs:")
        pass
What happens now is:

1. Before the yield statement is executed, I already get the list of monuments for all of the cities.
2. Whenever the yield statement is executed, control goes to the parse_monument function and does not return to the loop, so only the list of monuments present in the first city gets scraped.

Is there a way to do this? Is there a way to get the response object that the Request passes to parse_monument without actually going into the parse_monument method, so that I can use selectors to pick the elements I need out of that response?

Thanks!!
Answer 0 (score: 0)
I don't think you can use a callback the way you are trying to. Here is a refactoring:
import scrapy
from scrapy import Request


class HelloSpider(scrapy.Spider):
    name = "hello"
    allowed_domains = ["hello.com"]
    start_urls = (
        'http://hello.com/cities',
    )

    def parse(self, response):
        cities = ['London', 'Paris', 'New-York', 'Shanghai']
        for city in cities:
            xpath_exp = 'some xpath[city="' + city + '"]/div/some xpath'
            for monument_url in response.xpath(xpath_exp).extract():
                yield Request(monument_url, callback=self.parse_monument)

    def parse_monument(self, response):
        pass
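Since parse_monument is left as a stub above, here is a minimal sketch of what it could yield; the field names and XPath expressions are hypothetical and have to be adapted to the actual page:

    def parse_monument(self, response):
        # hypothetical fields; replace the XPath with the real expressions
        yield {
            'name': response.xpath('//h1/text()').extract_first(),
            'url': response.url,
        }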
Answer 1 (score: 0)
Request is an object, not a method. Scrapy will process the yielded Request objects and execute the callbacks asynchronously; you can think of a Request as something like a thread object.

The workaround is to go the other way around: pass the data you need from the parse method into the Request, so that you can process it inside parse_monument.
from scrapy import Spider, Request


class Hello(Spider):

    def parse(self, response):
        for c in cities:
            divs = sel.xpath('some xpath/div')
            for div in divs:
                monument_url = ''.join(div.xpath('some xpath').extract())
                data = ...  # set the data that you need from this loop
                # pass the data into the request's meta
                yield Request(monument_url, self.parse_monument, meta={'data': data})

    def parse_monument(self, response):
        # retrieve the data from the response's meta
        data = response.meta.get('data')
        ...
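As a side note, newer Scrapy versions (1.7 and later) also support cb_kwargs, which delivers the data to the callback as keyword arguments instead of through meta. A minimal sketch of the same idea, assuming the hypothetical city value c from the loop is what needs to be passed along:

            # inside the inner loop: pass the city through cb_kwargs (Scrapy 1.7+)
            yield Request(monument_url, callback=self.parse_monument,
                          cb_kwargs={'city': c})

    def parse_monument(self, response, city):
        # 'city' arrives here as a keyword argument rather than via response.meta
        ...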