Scrapy: I'm trying to extract the names and links of profiles on a website

Asked: 2013-09-18 10:45:28

Tags: python web-scraping scrapy

I'm new to Scrapy and I'm trying to extract data from a website. I believe I have a logic error, because my spider crawls the pages but never returns any scraped data. Any help would be greatly appreciated!

rules = (
    Rule(
        SgmlLinkExtractor(
            allow=(r'.*',),
            restrict_xpaths=('//div/div/div/span/a',)  # XPath for the profile links that lead to individual pages
        ),
        callback='parse_item',
        follow=True
    ),
    Rule(
        SgmlLinkExtractor(
            allow=(r'.*',),
            restrict_xpaths=('//*[contains(concat(" ", normalize-space(@class), " "), " on ")]',)  # XPath for the pagination links that cycle through the result pages
        ),
        callback='parse_item',
        follow=True
    ),
)

def parse_item(self, response):
    self.log('parse_item called for: %s' % response.url, level=log.INFO)
    hxs = HtmlXPathSelector(response)
    item = RealtorSpiderItem()
    item['name'] = hxs.select('//*[contains(concat(" ", normalize-space(@class), " "), " screenname ")]').extract()
    item['link'] = hxs.select('@href').extract()
    item['city'] = hxs.select('//*[contains(concat(" ", normalize-space(@class), " "), " locality ")]').extract()

    return item
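
For reference, the snippet assumes an item definition along these lines (a minimal sketch reconstructed from the three fields populated in parse_item; the items.py layout itself is an assumption, not from the original post):

# items.py -- minimal sketch; the fields are taken from
# parse_item above, the module layout is assumed
from scrapy.item import Item, Field

class RealtorSpiderItem(Item):
    name = Field()
    link = Field()
    city = Field()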

1 Answer:

Answer 0 (score: 0)

In a CrawlSpider, the rules are used to find links within the pages fetched from start_urls, and parse_item() is fired on every match.

I think you want to do this:

rules = (
    Rule(
        SgmlLinkExtractor(
            restrict_xpaths=('//div/div/div/span/a',)
        ),
        callback='parse_item'
    ),
)

That way there is a single rule that looks for links within the start_urls pages, and parse_item() is called on every match.

See the CrawlSpider example in the Scrapy documentation.
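
Putting it together, a runnable sketch of the suggested spider might look like the following (Scrapy 0.18-era API, matching the question; the spider name, allowed domain, and start URL are placeholders). Note that the question's parse_item has a second problem: select('@href') relative to the document root matches nothing, but since each profile page is reached by following its own link, response.url already is the link you want:

# A runnable sketch combining the answer's single rule with a
# fixed parse_item (Scrapy 0.18-era API; the spider name, domain
# and start URL below are placeholders, not from the question)
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item, Field


class RealtorSpiderItem(Item):
    # same fields as the items.py sketch above
    name = Field()
    link = Field()
    city = Field()


class RealtorSpider(CrawlSpider):
    name = 'realtor'
    allowed_domains = ['example.com']         # placeholder
    start_urls = ['http://www.example.com/']  # placeholder

    # one rule: extract the profile links and hand each
    # profile page to parse_item
    rules = (
        Rule(
            SgmlLinkExtractor(restrict_xpaths=('//div/div/div/span/a',)),
            callback='parse_item'
        ),
    )

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        item = RealtorSpiderItem()
        item['name'] = hxs.select(
            '//*[contains(concat(" ", normalize-space(@class), " "),'
            ' " screenname ")]/text()').extract()
        # this page was reached via the profile link, so its own
        # URL is the link we want
        item['link'] = response.url
        item['city'] = hxs.select(
            '//*[contains(concat(" ", normalize-space(@class), " "),'
            ' " locality ")]/text()').extract()
        return item

Because the rule sets a callback and does not set follow=True, CrawlSpider will not keep following links from the profile pages themselves, which matches the single-rule behaviour the answer describes.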