How to get the URL address from an upper-level function in Scrapy?

Asked: 2017-01-21 05:39:36

Tags: python scrapy

In my Scrapy spider.py below, look at the last two lines. I want to get the URL address held by url1 in parse(). How can I code this?

import scrapy


class DmozSpider(scrapy.Spider):
    name = "sh2"

    def __init__(self, category=None, *args, **kwargs):
        super(DmozSpider, self).__init__(*args, **kwargs)
        self.start_urls = ['http://esf.suzhou.fang.com/housing/__1_0_0_0_1_0_0/']

    def parse(self, response):
        num = response.xpath('//*[@id="pxBox"]/p/b/text()').extract()[0]
        if int(num) > 2000:
            urls = response.xpath('//*[@id="houselist_B03_02"]/div[1]/a/@href').extract()[1:]
            for url in urls:
                url1 = self.start_urls[0].split('/housing')[0] + url
                yield scrapy.Request(url1, callback=self.parse0)
        else:
            url = self.start_urls[0]
            yield scrapy.Request(url, callback=self.parse1)


    def parse0(self, response):  # e.g. http://esf.sh.fang.com/housing/25__1_0_0_0_1_0_0/
        num = response.xpath('//*[@id="pxBox"]/p/b/text()').extract()[0]
        if int(num) > 2000:
            urls = response.xpath('//*[@id="shangQuancontain"]/a/@href').extract()[1:]
            for url in urls:
                url2 = self.start_urls[0].split('/housing')[0] + url
                yield scrapy.Request(url2, callback=self.parse1)
        else:
            # <Here I want to get the URL address from url1 in parse()>
            yield scrapy.Request(url1, callback=self.parse1)
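As an aside, the base-URL splicing the spider does with `self.start_urls[0].split('/housing')[0] + url` can be checked in isolation; `urllib.parse.urljoin` is a more robust alternative for joining hrefs onto a page URL. A minimal sketch using the question's start URL (the `25__...` href is a hypothetical example):

```python
from urllib.parse import urljoin

start = 'http://esf.suzhou.fang.com/housing/__1_0_0_0_1_0_0/'

# The spider's approach: chop everything from '/housing' onward to get the base.
base = start.split('/housing')[0]
print(base)  # http://esf.suzhou.fang.com

# Splicing a root-relative href onto that base:
href = '/housing/25__1_0_0_0_1_0_0/'
print(base + href)

# urljoin produces the same result for root-relative hrefs,
# and also resolves relative ones like '25__1_0_0_0_1_0_0/' correctly.
print(urljoin(start, href))
```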

2 Answers:

Answer 0 (score: 1)

You can always pass whatever data you need along with your request, and read it back in the callback method:

yield Request(url=url, callback=self.parse, meta={"page": 1})

In the parse method:

def parse(self, response):
    page = response.meta["page"] + 1

An alternative to this is response.url.
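Applied to this question, parse() could attach url1 to the request's meta and parse0() could read it back. The sketch below uses simplified stand-in classes instead of the real scrapy.Request/Response (and a hypothetical hard-coded url1) so the data flow can be followed without running a crawl:

```python
# Simplified stand-ins for scrapy's Request/Response, just to show
# how meta travels from one callback to the next.
class Request:
    def __init__(self, url, callback=None, meta=None):
        self.url = url
        self.callback = callback
        self.meta = meta or {}

class Response:
    def __init__(self, request):
        self.url = request.url
        # Scrapy exposes the originating request's meta on the response.
        self.meta = request.meta

def parse(response):
    url1 = 'http://esf.suzhou.fang.com/housing/25__1_0_0_0_1_0_0/'
    # Attach url1 so the next callback can retrieve it.
    return Request(url1, callback=parse0, meta={'url1': url1})

def parse0(response):
    # In real Scrapy this would be: url1 = response.meta['url1']
    return response.meta['url1']

req = parse(Response(Request('http://esf.suzhou.fang.com/housing/__1_0_0_0_1_0_0/')))
print(parse0(Response(req)))  # http://esf.suzhou.fang.com/housing/25__1_0_0_0_1_0_0/
```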

Answer 1 (score: 0)

I figured it out: use url1 = response.url. In parse0, response.url is the address the response was fetched from, i.e. the url1 that parse() requested. (Note: re-requesting that same URL may require dont_filter=True to bypass Scrapy's duplicate filter.)