Scrapy start_urls

Posted: 2012-01-18 00:39:34

Tags: python, scrapy

The script (below), from this tutorial, contains two start_urls:

from scrapy.spider import Spider
from scrapy.selector import Selector

from dirbot.items import Website

class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        """
        The lines below define a spider contract. For more info see:
        http://doc.scrapy.org/en/latest/topics/contracts.html
        @url http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/
        @scrapes name
        """
        sel = Selector(response)
        sites = sel.xpath('//ul[@class="directory-url"]/li')
        items = []

        for site in sites:
            item = Website()
            item['name'] = site.xpath('a/text()').extract()
            item['url'] = site.xpath('a/@href').extract()
            item['description'] = site.xpath('text()').re('-\s[^\n]*\\r')
            items.append(item)

        return items

But why does it only scrape these two pages? I see allowed_domains = ["dmoz.org"], yet both pages also contain links to other pages within the dmoz.org domain. Why doesn't it scrape those as well?

6 Answers:

Answer 0 (score: 15)

The start_urls class attribute contains the start urls and nothing more. Once you have extracted the urls of other pages you want to scrape, yield the corresponding requests from the parse callback, each with [another] callback of its own:

import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider


class Spider(BaseSpider):
    name = 'my_spider'
    start_urls = ['http://www.domain.com/']
    allowed_domains = ['domain.com']

    def parse(self, response):
        '''Parse the main page and extract category links.'''
        hxs = HtmlXPathSelector(response)
        urls = hxs.select("//*[@id='tSubmenuContent']/a[position()>1]/@href").extract()
        for url in urls:
            url = urlparse.urljoin(response.url, url)
            self.log('Found category url: %s' % url)
            yield Request(url, callback=self.parseCategory)

    def parseCategory(self, response):
        '''Parse a category page and extract links to the items.'''
        hxs = HtmlXPathSelector(response)
        links = hxs.select("//*[@id='_list']//td[@class='tListDesc']/a/@href").extract()
        for link in links:
            itemLink = urlparse.urljoin(response.url, link)
            self.log('Found item link: %s' % itemLink, log.DEBUG)
            yield Request(itemLink, callback=self.parseItem)

    def parseItem(self, response):
        ...

If you still want to customize how the start requests are created, override the BaseSpider.start_requests() method, as in the sketch below.
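
A minimal sketch of such an override, written against the same old BaseSpider API used above; the URLs and the parse_page callback are placeholder assumptions, not taken from the original answer:

from scrapy.http import Request
from scrapy.spider import BaseSpider


class CustomStartSpider(BaseSpider):
    name = 'custom_start_spider'

    def start_requests(self):
        # Build the initial requests yourself instead of listing them in start_urls.
        # The URLs and the callback below are placeholders for illustration.
        for url in ['http://www.domain.com/page1', 'http://www.domain.com/page2']:
            yield Request(url, callback=self.parse_page)

    def parse_page(self, response):
        # Handle each downloaded page here.
        pass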

Answer 1 (score: 6)

start_urls contains the links from which the spider starts crawling. If you want to crawl recursively, you should use CrawlSpider and define rules for it. See http://doc.scrapy.org/en/latest/topics/spiders.html for an example.

Answer 2 (score: 2)

That class has no rules attribute. Look at http://readthedocs.org/docs/scrapy/en/latest/intro/overview.html and search for "rules" to find an example.

Answer 3 (score: 2)

If you use BaseSpider, then inside the callback you have to extract the urls you want yourself and return Request objects.

If you use CrawlSpider, link extraction is handled by the rules and by the SgmlLinkExtractor associated with them, roughly as in the sketch below.
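
A rough sketch of that CrawlSpider variant against the same old API; the allow pattern and the parse_item callback are assumptions made for the example, not part of the original answers:

from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule


class RecursiveSpider(CrawlSpider):
    name = 'recursive_spider'
    allowed_domains = ['dmoz.org']
    start_urls = ['http://www.dmoz.org/Computers/Programming/Languages/Python/Books/']

    rules = [
        # Follow every in-domain link and hand each response to parse_item.
        Rule(SgmlLinkExtractor(allow=()), callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        # Extract data from each followed page here. Do not override parse()
        # when subclassing CrawlSpider, since it drives the rules machinery.
        pass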

Answer 4 (score: 1)

If you use rules to follow links (this is already implemented in scrapy), the spider will scrape them as well. I hope that helps...

from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule


class Spider(CrawlSpider):
    name = 'my_spider'
    start_urls = ['http://www.domain.com/']
    allowed_domains = ['domain.com']
    # rules only take effect on a CrawlSpider subclass.
    rules = [Rule(SgmlLinkExtractor(allow=[], deny=[]), follow=True)]

    ...

Answer 5 (score: 0)

You did not write a function that handles the urls you want to fetch, so there are two ways to resolve this: 1. use rules (CrawlSpider), or 2. write a function that handles the new urls and pass it as the callback of the requests you yield, as in the sketch below.
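
A minimal sketch of the second option; the generic //a/@href extraction and the parse_new_url callback are illustrative assumptions, not from the original answer:

import urlparse

from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider


class FollowLinksSpider(BaseSpider):
    name = 'follow_links_spider'
    allowed_domains = ['dmoz.org']
    start_urls = ['http://www.dmoz.org/Computers/Programming/Languages/Python/Books/']

    def parse(self, response):
        # Extract the links you want to follow and request them with a new callback.
        hxs = HtmlXPathSelector(response)
        for href in hxs.select('//a/@href').extract():
            yield Request(urlparse.urljoin(response.url, href),
                          callback=self.parse_new_url)

    def parse_new_url(self, response):
        # Handle each newly discovered page here.
        pass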