Getting the next page with Scrapy

Date: 2016-05-04 17:17:42

Tags: scrapy, web-crawler

I'm interested in scraping the contractor data for Atlanta from this page:

http://www.1800contractor.com/d.Atlanta.GA.html?link_id=3658

so that I can open the links for the categories:

'Additions & Remodeling'
'Architects & Engineers'
'Fountains & Ponds'
......
.....
.....

But I can only open the first page:

http://www.1800contractor.com/d.Additions-Remodeling.Atlanta.GA.-12001.html?startingIndex=0&showDirectory=true

I tried to open the next page by following the link behind the 'Next' button:

next_page_url = response.xpath('/html/body/div[1]/center/table/tr[8]/td[2]/a/@href').extract_first()
absolute_next_page_url = response.urljoin(next_page_url)
request = scrapy.Request(absolute_next_page_url)
yield request

But it doesn't make any difference.

Here is the code of my spider:

import scrapy


class Spider_1800(scrapy.Spider):
    name = '1800contractor'
    allowed_domains = ['1800contractor.com']
    start_urls = (
        'http://www.1800contractor.com/d.Atlanta.GA.html?link_id=3658',
    )

    def parse(self, response):
        urls = response.xpath('/html/body/center/table/tr/td[2]/table/tr[6]/td/table/tr[2]/td/b/a/@href').extract()

        for url in urls:
            absolute_url = response.urljoin(url)
            request = scrapy.Request(
                absolute_url, callback=self.parse_contractors)
            yield request

        # process next page

        next_page_url = response.xpath('/html/body/div[1]/center/table/tr[8]/td[2]/a/@href').extract_first()
        absolute_next_page_url = response.urljoin(next_page_url)
        request = scrapy.Request(absolute_next_page_url)
        yield request

    def parse_contractors(self, response):
        name = response.xpath(
            '/html/body/div[1]/center/table/tr[5]/td/table/tr[1]/td/b/a/@href').extract()
        contractor = {
            'name': name,
            'url': response.url,
        }
        yield contractor

2 Answers:

Answer 0 (score: 2):

You are not paginating the right request. `parse` handles the requests generated from the urls in `start_urls`, which means you first need to enter each category from http://www.1800contractor.com/d.Atlanta.GA.html?link_id=3658.

def parse(self, response):
    urls = response.xpath('/html/body/center/table/tr/td[2]/table/tr[6]/td/table/tr[2]/td/b/a/@href').extract()

    for url in urls:
        absolute_url = response.urljoin(url)
        request = scrapy.Request(
            absolute_url, callback=self.parse_contractors)
        yield request

def parse_contractors(self, response):
    name = response.xpath(
        '/html/body/div[1]/center/table/tr[5]/td/table/tr[1]/td/b/a/@href').extract()
    contractor = {
        'name': name,
        'url': response.url,
    }
    yield contractor

    next_page_url = response.xpath('/html/body/div[1]/center/table/tr[8]/td[2]/a/@href').extract_first()
    if next_page_url:
        absolute_next_page_url = response.urljoin(next_page_url)
        yield scrapy.Request(absolute_next_page_url, callback=self.parse_contractors)
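For clarity, `response.urljoin` is essentially the standard library's `urllib.parse.urljoin` with the response's own URL as the base, which is why it handles both relative and absolute `href` values extracted above. A minimal Scrapy-free sketch (the URLs are illustrative):

```python
from urllib.parse import urljoin

# Stands in for response.url (illustrative URL, mirroring the category pages above).
base = 'http://www.1800contractor.com/d.Additions-Remodeling.Atlanta.GA.-12001.html?startingIndex=0&showDirectory=true'

# A relative href is resolved against the base page's location...
print(urljoin(base, 'd.Additions-Remodeling.Atlanta.GA.-12001.html?startingIndex=10&showDirectory=true'))

# ...while an already-absolute href is returned unchanged.
print(urljoin(base, 'http://www.1800contractor.com/other.html'))
```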

Answer 1 (score: 0):

Your xpath for picking the contractor urls isn't going to work after hitting the start_url. The 'Next' link appears on the contractor pages, so it has to be followed after the contractor urls. This will work for you:

def parse(self, response):
    urls = response.xpath('//table//*[@class="hiCatNaked"]/@href').extract()

    for url in urls:
      absolute_url = response.urljoin(url)
      request = scrapy.Request(
        absolute_url, callback=self.parse_contractors)
      yield request

def parse_contractors(self, response):
    name = response.xpath(
        '/html/body/div[1]/center/table/tr[5]/td/table/tr[1]/td/b/a/@href').extract()

    contractor = {
        'name': name,
        'url': response.url,
    }
    yield contractor

    next_page_url = response.xpath('//a[b[contains(., "Next")]]/@href').extract_first()
    if next_page_url:
        absolute_next_page_url = response.urljoin(next_page_url)
        yield scrapy.Request(absolute_next_page_url, callback=self.parse_contractors)
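The termination logic in both answers is the same: keep yielding a request for the next page only while `extract_first()` returns a value, and stop when it returns `None`. A hypothetical, Scrapy-free sketch of that loop (the `pages` dict stands in for fetched responses and is purely illustrative):

```python
# Fake "site": each page has some items and maybe a link to the next page.
pages = {
    'page1': {'items': ['a', 'b'], 'next': 'page2'},
    'page2': {'items': ['c'], 'next': 'page3'},
    'page3': {'items': ['d'], 'next': None},  # extract_first() would return None here
}

def crawl(start):
    url = start
    while url:                     # stop as soon as there is no next-page link
        page = pages[url]
        yield from page['items']   # mirrors yielding one item per contractor
        url = page['next']         # mirrors response.urljoin + scrapy.Request

print(list(crawl('page1')))  # ['a', 'b', 'c', 'd']
```

Scrapy does the same thing asynchronously via the scheduler, but the stopping condition is identical.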