In Scrapy

Date: 2018-04-01 17:42:10

Tags: python scrapy scrapy-spider

To save time and lines of repeated code on a very large project, I have been trying to instantiate multiple spiders in Scrapy from a single class definition. I haven't found anything in the documentation saying this is standard practice, but I also haven't found any indication that it can't or shouldn't be done. However, it doesn't work. Here is what I'm trying:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class ExampleSpider(CrawlSpider):

    def __init__(self, name, source, allowed_domains, starturls):
        self.name = name
        self.custom_settings = {'LOG_FILE':'logs/' + name + '.txt' }
        self.source = source
        self.allowed_domains = allowed_domains   
        self.start_urls = starturls
        self.rules = (Rule(LinkExtractor(allow=''), callback='parse_item', follow=True),)

    def parse_item(self, response):
        # do stuff here
        pass

SpiderInstance = ExampleSpider(
    'columbus',
    'Columbus Symphony',
    'columbussymphony.com',
    ['http://www.columbussymphony.com/events/'],
)

The error I get is:

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.6/bin/scrapy", line 11, in <module>
    sys.exit(execute())
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scrapy/cmdline.py", line 150, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scrapy/cmdline.py", line 90, in _run_print_help
    func(*a, **kw)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scrapy/cmdline.py", line 157, in _run_command
    cmd.run(args, opts)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scrapy/commands/crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scrapy/crawler.py", line 170, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scrapy/crawler.py", line 198, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scrapy/crawler.py", line 202, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scrapy/spiderloader.py", line 71, in load
    raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: columbus'

Is it possible to use Scrapy this way, and if so, what am I doing wrong?

2 Answers:

Answer 0 (score: 2):

1. Scrapy looks up spider classes, not instances. In your code, ExampleSpider is the class, while SpiderInstance is an instance of it. You probably need to do something like this:

class ColumbusSpider(ExampleSpider):
    name = 'columbus'
    source = 'Columbus Symphony'
    allowed_domains = ['columbussymphony.com']
    start_urls = ['http://www.columbussymphony.com/events/']
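
For this to work, the ExampleSpider base class would need to define its shared behavior at class level rather than in a required-argument __init__. A minimal sketch of such a base, assuming the same rules as in the question (nothing here comes from the original answer):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class ExampleSpider(CrawlSpider):
    # Subclasses supply name, source, allowed_domains and start_urls
    # as class attributes; the crawl rules and callback are shared.
    rules = (Rule(LinkExtractor(allow=''), callback='parse_item', follow=True),)

    def parse_item(self, response):
        # shared parsing logic for every subclass
        pass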

2. It's also worth noting that a spider's allowed_domains attribute should contain a list, tuple, or set of domains. In your example code it is a string.

3. Instead of subclassing ExampleSpider as in #1, you could also make ExampleSpider a metaclass, so that instantiating ExampleSpider gives you a class rather than a class instance. The sketch below shows the idea.
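
A minimal sketch of that idea, using the three-argument form of the built-in type() (the default metaclass) to build spider classes at runtime; the helper name make_spider_class is illustrative, not from the answer:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

def make_spider_class(name, source, allowed_domains, start_urls):
    # Calling type(classname, bases, attrs) is the metaclass machinery:
    # it returns a new class rather than an instance.
    def parse_item(self, response):
        pass  # per-spider parsing logic would go here

    return type(
        name.capitalize() + 'Spider',
        (CrawlSpider,),
        {
            'name': name,
            'source': source,
            'allowed_domains': allowed_domains,
            'start_urls': start_urls,
            'rules': (Rule(LinkExtractor(allow=''), callback='parse_item', follow=True),),
            'parse_item': parse_item,
        },
    )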

Answer 1 (score: 0):

After reading @starrify's answer, I arrived at a simple solution:

def class_factory(passed_name, passed_source, passed_allowed_domains, passed_start_urls):

    class ColumbusSpider(ExampleSpider):
        name = passed_name
        source = passed_source
        allowed_domains = passed_allowed_domains
        start_urls = passed_start_urls
        # ... other stuff

        def parse_item(self, response):
            # use any other passed parameters as needed
            pass

    return ColumbusSpider

columbus = class_factory(
    'columbustest',
    'Columbus Symphony',
    ['columbussymphony.com'],
    ['http://www.columbussymphony.com/events/'],
)  # use as many times as needed
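
One way to actually run several spiders built this way in a single process is Scrapy's CrawlerProcess, which accepts spider classes directly; a minimal sketch, assuming you are inside a standard Scrapy project (the venue list is illustrative):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())

# One (name, source, allowed_domains, start_urls) tuple per venue.
venues = [
    ('columbustest', 'Columbus Symphony',
     ['columbussymphony.com'], ['http://www.columbussymphony.com/events/']),
    # ... add further venues here
]

for args in venues:
    process.crawl(class_factory(*args))  # schedule each generated spider class

process.start()  # blocks until all scheduled crawls have finished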