Scrapy crawler won't go beyond the main page

Date: 2014-04-26 04:27:20

Tags: python html web-scraping web-crawler scrapy

I wrote a Scrapy crawler to try to collect the items on http://www.shop.ginakdesigns.com/main.sc:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector

from .. import items

class GinakSpider(CrawlSpider):
    name = "ginak"
    start_urls = [
        "http://www.shop.ginakdesigns.com/main.sc"
    ]
    rules = [
        # Follow category pages (no callback, just crawl deeper)
        Rule(SgmlLinkExtractor(allow=[r'category\.sc\?categoryId=\d+'])),
        # Parse each product page
        Rule(SgmlLinkExtractor(allow=[r'product\.sc\?productId=\d+&categoryId=\d+']),
             callback='parse_item'),
    ]

    def parse_item(self, response):
        sel = Selector(response)
        self.log(response.url)
        item = items.GinakItem()
        item['name'] = sel.xpath('//*[@id="wrapper2"]/div/div/div[1]/div/div/div[2]/div/div/div[1]/div[1]/h2/text()').extract()
        item['price'] = sel.xpath('//*[@id="listPrice"]/text()').extract()
        item['description'] = sel.xpath('//*[@id="wrapper2"]/div/div/div[1]/div/div/div[2]/div/div/div[1]/div[4]/div/p/text()').extract()
        item['category'] = sel.xpath('//*[@id="breadcrumbs"]/a[2]/text()').extract()
        return item

However, it never follows any links beyond the main page. I've tried all sorts of things and double-checked the regular expressions in my SgmlLinkExtractor rules. What's wrong here?

1 Answer:

Answer 0 (score: 0)

The problem is that the links you are trying to extract have a jsessionid inserted into them, for example:

<a href="/category.sc;jsessionid=EA2CAA7A3949F4E462BBF466E03755B7.m1plqscsfapp05?categoryId=16">

Fix this by matching any characters with the non-greedy .*? instead of requiring a literal ? right after the page name:

rules = [Rule(SgmlLinkExtractor(allow=[r'category\.sc.*?categoryId=\d+']), callback='parse_item'),
         Rule(SgmlLinkExtractor(allow=[r'product\.sc.*?productId=\d+&categoryId=\d+']), callback='parse_item')]
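You can check the difference outside of Scrapy with a quick regex test. This is a minimal sketch; the sample href is taken from the answer above, and the second one is a hypothetical product link in the same style:

```python
import re

# Sample hrefs with the jsessionid segment inserted by the site
# (the first is from the answer; the second is a hypothetical product link)
links = [
    "/category.sc;jsessionid=EA2CAA7A3949F4E462BBF466E03755B7.m1plqscsfapp05?categoryId=16",
    "/product.sc;jsessionid=EA2CAA7A3949F4E462BBF466E03755B7.m1plqscsfapp05?productId=5&categoryId=16",
]

# Original pattern: expects the '?' to come immediately after 'category.sc'
old_pattern = re.compile(r'category\.sc\?categoryId=\d+')
# Fixed pattern: .*? non-greedily skips over ';jsessionid=...' up to the query string
new_pattern = re.compile(r'category\.sc.*?categoryId=\d+')

print(bool(old_pattern.search(links[0])))  # False: jsessionid breaks the literal '?'
print(bool(new_pattern.search(links[0])))  # True: the fixed rule matches
```

Because the link extractor only follows URLs whose pattern matches, the old rules silently matched nothing, which is why the spider never left the main page.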

Hope that helps.