Scrapy recursive crawler problem

Date: 2014-12-13 11:47:46

Tags: python recursion scrapy

I am trying to crawl viagogo.com. I want to scrape every show from this page: http://www.viagogo.com/Concert-Tickets/Rock-and-Pop. I am able to get the shows on the first page, but when I try to move to the next page, the spider just won't crawl it! Here is my code:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from viagogo.items import ViagogoItem
from scrapy.http import Request, FormRequest

class viagogoSpider(CrawlSpider):
    name="viagogo"
    allowed_domains=['viagogo.com']
    start_urls = ["http://www.viagogo.com/Concert-Tickets/Rock-and-Pop"]

    rules = (
        # Running on pages
        Rule(SgmlLinkExtractor(restrict_xpaths=('//*[@id="clientgridtable"]/div[2]/div[2]/div/ul/li[7]/a')), callback='Parse_Page', follow=True),

        # Running on artists in title
        Rule(SgmlLinkExtractor(restrict_xpaths=('//*[@id="clientgridtable"]/table/tbody')), callback='Parse_artists_Tickets', follow=True),

    )

    #all_list = response.xpath('//a[@class="t xs"]').extract()

    def Parse_Page(self, response):
        item = ViagogoItem()
        item["title"] = response.xpath('//title/text()').extract()
        item["link"] = response.url
        print 'Page!' + response.url
        yield Request(url=response.url, meta={'item': item}, callback=self.Parse_Page)


    def Parse_artists_Tickets(self, response):
        item = ViagogoItem()
        item["title"] = response.xpath('//title/text()').extract()
        item["link"] = response.url
        print response.url
        with open('viagogo_output', 'a') as f:
            f.write(str(item["title"]) + '\n')
        return item

I can't figure out what I'm doing wrong, but the output (inside the file) only ever shows the first page..

Thanks!

1 Answer:

Answer 0 (score: 0)

This:

yield Request(url=response.url, ...)

asks Scrapy to crawl again the same page it has just crawled, instead of actually moving on to the next page. By default, Scrapy enables a dupefilter that avoids issuing duplicate requests; this is most likely why the second request never happens and the second callback is never called.
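The effect of the dupefilter can be illustrated with a toy model (a simplified sketch, not Scrapy's actual implementation): a scheduler keeps a set of already-seen URLs and silently drops any request for a URL it has scheduled before, so the callback attached to that request never runs.

```python
# Simplified model of Scrapy's duplicate-request filter (not the real
# implementation): the scheduler remembers every URL it has scheduled
# and silently drops repeats, so their callbacks never fire.
class ToyScheduler:
    def __init__(self):
        self.seen = set()
        self.queue = []

    def enqueue(self, url):
        if url in self.seen:       # dupefilter: duplicate request is dropped,
            return False           # so its callback is never called
        self.seen.add(url)
        self.queue.append(url)
        return True

sched = ToyScheduler()
print(sched.enqueue("http://www.viagogo.com/Concert-Tickets/Rock-and-Pop"))  # True
# Re-requesting response.url, as Parse_Page does, gets filtered out:
print(sched.enqueue("http://www.viagogo.com/Concert-Tickets/Rock-and-Pop"))  # False
```

This is why `yield Request(url=response.url, ...)` silently produces nothing on the second attempt: the request is discarded before it is ever sent.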

If you want to keep parsing more items out of the same response, you can call the second callback directly, passing it the response:

def Parse_Page(self, response):
    # extract and yield some items here...
    # then delegate to the other callback on the same response
    for it in self.Parse_artists_Tickets(response):
        yield it

If you want to follow other pages, you have to issue requests to URLs that have not been seen before.
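The key point is that the next-page request must target a URL different from `response.url`, so the dupefilter lets it through. A minimal sketch of building such a URL from a relative "next" link (the `?page=2` href is hypothetical, not taken from viagogo's actual markup; in the spider you would extract it with something like `response.xpath('//a[@rel="next"]/@href')` and, since the spider above is Python 2, import `urljoin` from `urlparse` instead):

```python
# Sketch: build an absolute next-page URL from a relative href, then
# check it differs from the current URL. Both the href and the xpath
# mentioned in the text are assumptions about the site's markup.
from urllib.parse import urljoin  # `from urlparse import urljoin` on Python 2

current = "http://www.viagogo.com/Concert-Tickets/Rock-and-Pop"
next_href = "?page=2"  # hypothetical value extracted from the "next" link

next_url = urljoin(current, next_href)
print(next_url)             # http://www.viagogo.com/Concert-Tickets/Rock-and-Pop?page=2
print(next_url != current)  # True: the dupefilter will not drop this request
```

In the spider, `Parse_Page` would then `yield Request(url=next_url, callback=self.Parse_Page)` so the crawl advances one page at a time.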