How to scrape all the pages of this link

Asked: 2016-06-23 10:37:06

Tags: python scrapy

I want to scrape all the pages of this link: http://www.jobisjob.co.uk/search?directUserSearch=true&whatInSearchBox=&whereInSearchBox=london

I have tried different approaches, but I haven't found a solution.

Below is my code:

    import scrapy

    # JobgoItem is defined in the project's items.py; the import path below is assumed
    from jobgo.items import JobgoItem


    class jobisjobSpider(scrapy.Spider):
        name = 'jobisjob'
        allowed_domains = ['jobisjob.co.uk']
        start_urls = ['http://www.jobisjob.co.uk/search?directUserSearch=true&whatInSearchBox=&whereInSearchBox=london']

        def parse(self, response):
            # Each job offer sits under #ajax-results > .offer_list > .box_offer > .offer
            for sel in response.xpath('//div[@id="ajax-results"]/div[@class="offer_list "]/div[@class="box_offer"]/div[@class="offer"]'):
                item = JobgoItem()
                item['title'] = sel.xpath('strong[@class="title"]/a/text()').extract()
                item['description'] = sel.xpath('p[@class="description"]/text()').extract()
                item['company'] = sel.xpath('p[@class="company"]/span[@itemprop="hiringOrganization"]/a[@itemprop="name"]/text()').extract()
                item['location'] = sel.xpath('p[@class="company"]/span/span[@class="location"]/span/text()').extract()
                yield item

            # Follow the "next page" link, if one exists
            next_page = response.css("div.wrap paginator results > ul > li > a[rel='nofollow']::attr('href')")
            if next_page:
                url = response.urljoin(next_page[0].extract())
                print "next page: " + str(url)
                yield scrapy.Request(url)

Can anyone help with this problem? I am completely new to Python.

1 Answer:

Answer 0: (score: 0)

There is an error in your next-page selector. Your current selector looks for a div with class wrap, then searches inside it for a descendant tag named paginator, and inside that for a tag named results. But wrap, paginator and results are all classes on the same div, so they must be chained together with dots instead of separated by spaces.
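
To see the difference, here is a minimal sketch using Scrapy's Selector on a made-up HTML fragment (the markup below is illustrative, not the real page):

    from scrapy.selector import Selector

    html = '<div class="wrap paginator results"><ul><li><a rel="nofollow" href="/page2">2</a></li></ul></div>'
    sel = Selector(text=html)

    # Descendant selector: looks for a <paginator> tag inside div.wrap -- matches nothing
    print(sel.css("div.wrap paginator results > ul > li > a::attr(href)").extract())   # []

    # Chained classes: one div carrying all three classes -- matches the link
    print(sel.css("div.wrap.paginator.results > ul > li > a::attr(href)").extract())   # ['/page2']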

The correct selector is:

    div.wrap.paginator.results > ul > li > a:last-child[rel='nofollow']::attr('href')
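
With that fix, the pagination block at the end of your parse method would look like this (a sketch of just the changed part; callback=self.parse is Scrapy's default and is spelled out only for clarity):

    next_page = response.css("div.wrap.paginator.results > ul > li > a:last-child[rel='nofollow']::attr('href')")
    if next_page:
        url = response.urljoin(next_page[0].extract())
        yield scrapy.Request(url, callback=self.parse)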