I'm new to Scrapy and I'm trying to extract data from a website. I believe I have a logic error, because my spider crawls the pages but never returns any scraped data. Any help would be greatly appreciated!
rules = (
    Rule(
        SgmlLinkExtractor(
            allow=(r'.*',),
            restrict_xpaths=('//div/div/div/span/a',)  # XPath for the profile links that lead to the individual pages
        ),
        callback='parse_item',
        follow=True
    ),
    Rule(
        SgmlLinkExtractor(
            allow=(r'.*',),
            restrict_xpaths=('//*[contains(concat(" ", normalize-space(@class), " "), " on ")]',)  # XPath for the pagination links that cycle through pages
        ),
        callback='parse_item',
        follow=True
    ),
)
def parse_item(self, response):
    self.log('parse_item called for: %s' % response.url, level=log.INFO)
    hxs = HtmlXPathSelector(response)
    item = RealtorSpiderItem()
    item['name'] = hxs.select('//*[contains(concat(" ", normalize-space(@class), " "), " screenname ")]').extract()
    item['link'] = hxs.select('@href').extract()
    item['city'] = hxs.select('//*[contains(concat(" ", normalize-space(@class), " "), " locality ")]').extract()
    return item
Answer 0 (score: 0)
In a crawl spider, you use the rules to find pages starting from the start_urls, and parse_item() is triggered on every match.
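
To illustrate that flow, here is a minimal sketch of a crawl spider; the domain name and the XPath are placeholders, and the imports follow the pre-1.0 Scrapy API used in the question:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MinimalSpider(CrawlSpider):
    name = 'minimal'
    start_urls = ['http://www.example.com/']  # crawling starts from here

    rules = (
        # Every link the extractor matches is downloaded, and each
        # response is handed to parse_item().
        Rule(SgmlLinkExtractor(restrict_xpaths=('//span/a',)),
             callback='parse_item'),
    )

    def parse_item(self, response):
        self.log('parse_item called for: %s' % response.url)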
I think you want to do this:
rules = (
    Rule(
        SgmlLinkExtractor(restrict_xpaths=('//div/div/div/span/a',)),
        callback='parse_item'
    ),
)
This way there is a single rule that looks for links inside the start_urls pages, and parse_item() is called on every match.
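
Putting the pieces together, the whole spider might then look like the sketch below. A few details go beyond the answer itself and are assumptions: the item class is imported from a hypothetical realtor.items module, /text() is appended to the selectors so the fields hold text rather than whole elements, and item['link'] is filled from response.url, because the original '@href' selector is evaluated against the document root and extracts nothing:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

from realtor.items import RealtorSpiderItem  # hypothetical project module

class RealtorSpider(CrawlSpider):
    name = 'realtor'
    start_urls = ['http://www.example.com/']  # placeholder start page

    rules = (
        Rule(
            SgmlLinkExtractor(restrict_xpaths=('//div/div/div/span/a',)),
            callback='parse_item'
        ),
    )

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        item = RealtorSpiderItem()
        item['name'] = hxs.select('//*[contains(concat(" ", normalize-space(@class), " "), " screenname ")]/text()').extract()
        # The profile page URL itself; '@href' relative to the root is empty.
        item['link'] = response.url
        item['city'] = hxs.select('//*[contains(concat(" ", normalize-space(@class), " "), " locality ")]/text()').extract()
        return item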