Scrapy CrawlSpider is not following links

Date: 2014-12-03 22:24:32

Tags: python-2.7 web-scraping scrapy screen-scraping

So I've written a web crawler to extract food items from walmart.com. Here is my spider. I can't figure out why it isn't following the links on the left-hand side; it pulls the main page and then finishes.

My goal is for it to follow every link in the flyout bar on the left and then scrape each food item from those pages.

I even tried using allow=() so that it would follow every link on the page, but it still doesn't work.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.loader import XPathItemLoader
from scrapy.contrib.loader.processor import Join, MapCompose
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor as sle
from walmart_scraper.items import GroceryItem


class WalmartFoodSpider(CrawlSpider):
    name = "walmart_scraper"
    allowed_domains = ["www.walmart.com"]
    start_urls = ["http://www.walmart.com/cp/976759"]
    rules = (Rule(sle(restrict_xpaths=('//div[@class="lhn-menu-flyout-inner lhn-menu-flyout-2col"]'
                                       '/ul[@class="block-list"]/li/a',)),
                  callback='parse',
                  follow=True),)

    items_list_xpath = '//div[@class="js-tile tile-grid-unit"]'

    item_fields = {'title': './/a[@class="js-product-title"]/h3[@class="tile-heading"]/div',
                   'image_url': './/a[@class="js-product-image"]/img[@class="product-image"]/@src',
                   'price': './/div[@class="tile-price"]/div[@class="item-price-container"]/span[@class="price price-display"]|//div[@class="tile-price"]/div[@class="item-price-container"]/span[@class="price price-display price-not-available"]',
                   'category': '//nav[@id="breadcrumb-container"]/ol[@class="breadcrumb-list"]/li[@class="js-breadcrumb breadcrumb "][2]/a',
                   'subcategory': '//nav[@id="breadcrumb-container"]/ol[@class="breadcrumb-list"]/li[@class="js-breadcrumb breadcrumb active"]/a',
                   'url': './/a[@class="js-product-image"]/@href'}
    def parse(self, response):
        selector = HtmlXPathSelector(response)

        # iterate over deals
        for item in selector.select(self.items_list_xpath):
            loader = XPathItemLoader(GroceryItem(), selector=item)

            # define processors
            loader.default_input_processor = MapCompose(unicode.strip)
            loader.default_output_processor = Join()

            # iterate over fields and add xpaths to the loader
            for field, xpath in self.item_fields.iteritems():
                loader.add_xpath(field, xpath)
            yield loader.load_item()

1 Answer:

Answer 0 (score: 5):

You shouldn't override the parse() method when using CrawlSpider. You should set a custom callback with a different name on your Rule. Here is an excerpt from the official documentation:

    When writing crawl spider rules, avoid using parse as callback, since
    the CrawlSpider uses the parse method itself to implement its logic.
    So if you override the parse method, the crawl spider will no longer
    work.