Scraper not parsing pages

Date: 2013-09-13 22:38:33

Tags: python scrapy

I have a spider, shown below, but it doesn't seem to reach the parse function. Could someone take a quick look and tell me if I'm missing something? Am I implementing the SgmlLinkExtractor correctly?

The spider should pick out all the links from the left sidebar, create requests from them, and then parse the next page for a Facebook link. It should also do this for the other pages matched by the SgmlLinkExtractor. At the moment the spider runs, but it doesn't parse any pages.

class PrinzSpider(CrawlSpider):
    name = "prinz"
    allowed_domains = ["prinzwilly.de"]
    start_urls = ["http://www.prinzwilly.de/"]

    rules = (
        Rule(
            SgmlLinkExtractor(
                allow=(r'veranstaltungen-(.*)', ),
            ),
            callback='parse'
            ),
        )

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        startlinks = hxs.select("//ul[@id='mainNav2']/li/a")
        print startlinks
        for link in startlinks:
            giglink = link.select('@href').extract()
            item = GigItem()
            item['gig_link'] = giglink
            request = Request(item['gig_link'], callback='parse_gig_page')
            item.meta['item'] = item
            yield request

    def parse_gig_page(self, response):
        hxs = HtmlXPathSelector(response)
        item = response.meta['item']
        gig_content = hxs.select("//div[@class='n']/table/tbody").extract()
        fb_link = re.findall(r'(?:www.facebook.com/)(.*)', gig_content)
        print '********** FB LINK ********', fb_link
        return item

EDIT:

settings.py

BOT_NAME = 'gigscraper'

SPIDER_MODULES = ['gigscraper.spiders']
NEWSPIDER_MODULE = 'gigscraper.spiders'

ITEM_PIPLINES = ['gigscraper.pipelines.GigscraperPipeline']

items.py

from scrapy.item import Item, Field

class GigItem(Item):
    gig_link = Field()

pipelines.py

class GigscraperPipeline(object):
    def process_item(self, item, spider):
        print 'here I am in the pipeline'
        return item

1 answer:

Answer 0: (score: 0)

Two problems:

  • extract() returns a list, but you are missing [0]
  • the Request callback should not be a string; use self.parse_gig_page
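To see the first point outside Scrapy, here is a plain-Python sketch; the HTML snippet below is a made-up stand-in for what extract() would return from your page, not real data from prinzwilly.de:

```python
import re

# extract() returns a *list* of strings; this list is a hypothetical
# stand-in for such a result.
extracted = ['<td><a href="http://www.facebook.com/someband">gig</a></td>']

# Passing the list straight to re.findall raises a TypeError, which is
# why the original parse_gig_page never produced a Facebook link:
try:
    re.findall(r'(?:www\.facebook\.com/)(.*)', extracted)
except TypeError as exc:
    print('TypeError:', exc)

# Indexing with [0] hands re.findall the string it expects:
fb_links = re.findall(r'(?:www\.facebook\.com/)(.*)', extracted[0])
print(fb_links)  # the capture group grabs everything after the domain
```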

Here is the modified code (working):

import re
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.http import Request
from scrapy.item import Item, Field
from scrapy.selector import HtmlXPathSelector


class GigItem(Item):
    gig_link = Field()


class PrinzSpider(CrawlSpider):
    name = "prinz"
    allowed_domains = ["prinzwilly.de"]
    start_urls = ["http://www.prinzwilly.de/"]

    rules = (Rule(SgmlLinkExtractor(allow=(r'veranstaltungen-(.*)',)), callback='parse'),)

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        startlinks = hxs.select("//ul[@id='mainNav2']/li/a")
        for link in startlinks:
            item = GigItem()
            item['gig_link'] = link.select('@href').extract()[0]
            yield Request(item['gig_link'], callback=self.parse_gig_page, meta={'item': item})

    def parse_gig_page(self, response):
        hxs = HtmlXPathSelector(response)
        item = response.meta['item']
        gig_content = hxs.select("//div[@class='n']/table/tbody").extract()[0]
        fb_link = re.findall(r'(?:www.facebook.com/)(.*)', gig_content)
        print '********** FB LINK ********', fb_link
        return item
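And to see the second point in isolation (FakeSpider is a hypothetical stand-in class, not Scrapy code): the string 'parse_gig_page' merely names a method, while the bound method itself is callable, which is what Request needs for its callback:

```python
# Hypothetical stand-in for a spider class; not Scrapy code.
class FakeSpider:
    def parse_gig_page(self, response):
        return 'parsed: ' + response

spider = FakeSpider()

string_callback = 'parse_gig_page'      # what the original code passed
bound_callback = spider.parse_gig_page  # what the fixed code passes

assert not callable(string_callback)    # a str cannot be invoked
assert callable(bound_callback)         # a bound method can
print(bound_callback('fake response'))  # → parsed: fake response
```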

Hope that helps.