Why are member variables of a Scrapy extension class not visible in the spider, but accessible through the pipeline?

Asked: 2014-06-06 10:54:25

Tags: python web-scraping scrapy scrapy-spider

I created an extension in Scrapy to set a common path variable (and a few other things), so that if the output path ever changes, only one file needs to be modified. But I cannot access that path inside the spider.

Here is the extension code:

import datetime, re, os, random
from scrapy import signals
from scrapy.spider import Spider
from scrapy.conf import settings

class Common(object):
    # Shared across the project; set once from the crawler settings
    output_dir = ''

    @classmethod
    def from_crawler(cls, crawler):
        # Use the crawler's settings, not the module-level import above
        settings = crawler.settings

        # Build the output directory from the DATE setting,
        # falling back to today's date
        if settings['DATE']:
            cls.output_dir = 'output/' + settings['DATE'] + '/'
        else:
            cls.output_dir = 'output/' + datetime.date.today().strftime('%Y-%m-%d') + '/'

The extension above is enabled with the following setting:

EXTENSIONS = {'scrapyproject.common.Common':500,}
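
For comparison, the extension examples in the Scrapy docs have from_crawler build and return an instance of the extension class. A minimal sketch along those documented lines, reusing the DATE setting from the code above:

import datetime

class Common(object):
    output_dir = ''

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        # Same path logic as above, written with settings.get()
        date = crawler.settings.get('DATE') or datetime.date.today().strftime('%Y-%m-%d')
        cls.output_dir = 'output/' + date + '/'
        return ext  # Scrapy keeps the returned instance in its extension manager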

My spider code is as follows:

from scrapyproject.spiderCommon import *

class dmozSpider(CrawlSpider):
    name = 'dmozSpider'
    allowed_domains = ['www.dmoz.org']
    start_urls = ['http://www.dmoz.org']

    rules = (
        Rule(SgmlLinkExtractor(allow=()), callback='parse_item', follow=True),
    )

    def __init__(self, *a, **kw):
        super(dmozSpider, self).__init__(self, *a, **kw)
        # Truncate the output file once, when the spider opens
        dispatcher.connect(self.my_spider_opened, signals.spider_opened)

    def parse_item(self, response):
        sel = Selector(response)

        # Append the extracted directory links to <output_dir>/dmozSpider.csv
        vifUrls = sel.xpath('//ul[@class="directory dir-col"]/li/a/@href').extract()
        with open(Common.output_dir + self.name + '.csv', 'a') as f:
            for vifUrl in vifUrls:
                print vifUrl
                f.write("%s\n" % vifUrl)

    def my_spider_opened(self, spider):
        fo = open(Common.output_dir + self.name + '.csv', "w+")
        fo.truncate()
        fo.close()

where the spiderCommon file contains the following:

from scrapyproject.common import *
from scrapy.selector import Selector
from scrapy.xlib.pydispatch import dispatcher
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

The value of Common.output_dir is not accessible inside the spider, but I can access it inside the pipeline:

from scrapyproject.common import *

class XmlExportPipeline(object):
    def __init__(self, **kwargs):
        self.file_count = 1

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        # Hook the pipeline into the spider_opened/spider_closed signals
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        # Here the value set by the Common extension is visible
        print Common.output_dir

    def spider_closed(self, spider):
        self.file_count = self.file_count + 1

    def process_item(self, item, spider):
        return item
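
Extensions, spiders and pipelines all run in the same process, so a class attribute set by the extension's from_crawler would normally be visible to the spider too. As a cross-check, the spider can also derive the path from the settings itself instead of going through the extension. A minimal sketch in newer (1.0+) Scrapy style, where spiders also get a from_crawler hook; note the module path differs from the 0.x code above:

import datetime
from scrapy.spiders import CrawlSpider  # post-1.0 module path

class DmozSpider(CrawlSpider):
    name = 'dmozSpider'

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(DmozSpider, cls).from_crawler(crawler, *args, **kwargs)
        # Compute the same output path the Common extension builds,
        # directly from the crawler settings
        date = crawler.settings.get('DATE') or datetime.date.today().strftime('%Y-%m-%d')
        spider.output_dir = 'output/' + date + '/'
        return spider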

When I try to run the spider above, it gets stuck at [scrapy] DEBUG: Web service listening on 0.0.0.0:6080, then finishes without crawling any links. The reason is that it is not getting the value of Common.output_dir. Can anyone point out where I am going wrong?
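For debugging, one way to confirm whether the extension populated the value before the spider started is to log it from a spider_opened handler. A hypothetical diagnostic extension (not part of the project above), wired up the same way as the pipeline:

from scrapy import signals
from scrapyproject.common import Common

class DebugTiming(object):
    # Hypothetical helper for checking initialization order
    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        return ext

    def spider_opened(self, spider):
        # If the Common extension ran first, this logs the populated path
        spider.log("Common.output_dir = %r" % Common.output_dir)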

0 Answers:

No answers yet.