Initializing a CrawlSpider in Scrapy

Asked: 2012-08-30 07:08:26

Tags: web-scraping scrapy

I have written a spider in Scrapy that basically works fine and does exactly what it is supposed to do. The problem is that I need to make a small change to it, and I have tried several approaches without success (e.g. modifying the InitSpider). Here is what the script is supposed to do:

  • Crawl the start URL http://www.example.de/index/search?method=simple
  • Then call the URL http://www.example.de/index/search?filter=homepage
  • From there on, crawl using the patterns defined in the rules

So basically all that needs to change is one URL call in between. I would rather not rewrite the whole thing with a BaseSpider, so I was hoping somebody knows a way to achieve this :)
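What is being asked for is essentially request chaining: each callback returns the next request, and only the last one hands control over to the rule-based crawl. Stripped of Scrapy entirely, the control flow can be sketched with a toy scheduler. The `Request` class and the `run`/`visit_*` names below are made up purely for illustration (this is not the Scrapy API); only the URLs come from the question:

```python
# Toy illustration of request chaining -- no Scrapy involved.
# Each callback returns the next "request" until the real crawl would start.

class Request(object):
    """Minimal stand-in for a request object (hypothetical)."""
    def __init__(self, url, callback):
        self.url = url
        self.callback = callback

def run(first_request):
    """Drive the chain the way a scheduler would: fetch, call back, repeat."""
    visited = []
    request = first_request
    while request is not None:
        visited.append(request.url)              # pretend we downloaded the page
        request = request.callback(request.url)  # callback decides what comes next
    return visited

def visit_start(response_url):
    # step 2: after the start URL, call the filter URL in between
    return Request("http://www.example.de/index/search?filter=homepage",
                   callback=visit_filter)

def visit_filter(response_url):
    # step 3: from here the rule-based crawling would take over
    return None

chain = run(Request("http://www.example.de/index/search?method=simple",
                    callback=visit_start))
print(chain)
```

The point of the sketch is that the "one URL call in between" is just one extra link in the callback chain before the normal crawling begins.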

If you need any additional information, please let me know. You can find the current script below.

#!/usr/bin/python
# -*- coding: utf-8 -*-

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from example.items import ExampleItem
from scrapy.contrib.loader.processor import TakeFirst
import re
import urllib

take_first = TakeFirst()

class ExampleSpider(CrawlSpider):
    name = "example"
    allowed_domains = ["example.de"]

    start_url = "http://www.example.de/index/search?method=simple"
    start_urls = [start_url]

    rules = (
        # http://www.example.de/index/search?page=2
        # http://www.example.de/index/search?page=1&tab=direct
        Rule(SgmlLinkExtractor(allow=(r'/index/search\?page=\d*$', )), callback='parse_item', follow=True),
        Rule(SgmlLinkExtractor(allow=(r'/index/search\?page=\d*&tab=direct', )), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)

        # fetch all company entries
        companies = hxs.select("//ul[contains(@class, 'directresults')]/li[contains(@id, 'entry')]")
        items = []

        for company in companies:
            item = ExampleItem()
            item['name'] = take_first(company.select(".//span[@class='fn']/text()").extract())
            item['address'] = company.select(".//p[@class='data track']/text()").extract()
            item['website'] = take_first(company.select(".//p[@class='customurl track']/a/@href").extract())

            # we try to fetch the number directly from the page (only works for premium entries)
            item['telephone'] = take_first(company.select(".//p[@class='numericdata track']/a/text()").extract())

            if not item['telephone']:
                # if we cannot fetch the number it has been encoded on the client and hidden in the rel=""
                item['telephone'] = take_first(company.select(".//p[@class='numericdata track']/a/@rel").extract())

            items.append(item)
        return items

Edit

Here is my attempt with the InitSpider: https://gist.github.com/150b30eaa97e0518673a I got the idea from here: Crawling with an authenticated session in Scrapy

As you can see, it still inherits from CrawlSpider, but I had to make some changes to the core Scrapy files (not my favourite approach): I let CrawlSpider inherit from InitSpider instead of BaseSpider (source).

This works so far, but the spider just stops after the first page instead of picking up all the other pages.

Besides, this approach seems absolutely unnecessary to me :)

1 Answer:

Answer 0 (score: 2)

OK, I found the solution myself and it is actually much simpler than I initially thought :)

Here is the simplified script:

#!/usr/bin/python
# -*- coding: utf-8 -*-

from scrapy.spider import BaseSpider
from scrapy.http import Request
from scrapy import log
from scrapy.selector import HtmlXPathSelector
from example.items import ExampleItem
from scrapy.contrib.loader.processor import TakeFirst
import re
import urllib

take_first = TakeFirst()

class ExampleSpider(BaseSpider):
    name = "ExampleNew"
    allowed_domains = ["www.example.de"]

    start_page = "http://www.example.de/index/search?method=simple"
    direct_page = "http://www.example.de/index/search?page=1&tab=direct"
    filter_page = "http://www.example.de/index/search?filter=homepage"

    def start_requests(self):
        """This function is called before crawling starts."""
        return [Request(url=self.start_page, callback=self.request_direct_tab)]

    def request_direct_tab(self, response):
        return [Request(url=self.direct_page, callback=self.request_filter)]

    def request_filter(self, response):
        return [Request(url=self.filter_page, callback=self.parse_item)]

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)

        # fetch the items you need and yield them like this:
        # yield item

        # fetch the next pages to scrape
        for url in hxs.select("//div[@class='limiter']/a/@href").extract():
            absolute_url = "http://www.example.de" + url
            yield Request(absolute_url, callback=self.parse_item)

As you can see, I am now using a BaseSpider and just generating the new requests myself at the end. At the beginning, I simply walk through all the different requests that need to be made before the crawling can start.
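One small note on the pagination step: instead of concatenating `"http://www.example.de" + url`, the standard library's `urljoin` resolves the extracted href against the page URL and also copes with hrefs that are already absolute. Python 2 (which this spider targets) spells the import `from urlparse import urljoin`; the sketch below uses the Python 3 spelling:

```python
from urllib.parse import urljoin

# the page the hrefs were extracted from (in Scrapy this would be response.url)
base = "http://www.example.de/index/search?filter=homepage"

# a relative href is resolved against the page URL
print(urljoin(base, "/index/search?page=2"))
# → http://www.example.de/index/search?page=2

# an already-absolute href passes through unchanged
print(urljoin(base, "http://www.example.de/index/search?page=3"))
# → http://www.example.de/index/search?page=3
```

Inside `parse_item` that would read `absolute_url = urljoin(response.url, url)`; the hard-coded host disappears.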

I hope this helps somebody :) If you have questions, I will be glad to answer them.