I am new to Python and Scrapy. I am using the approach from the blog post Running multiple scrapy spiders programmatically to run my spiders inside a Flask app. Here is the code:
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from twisted.internet import reactor

# list of crawlers
TO_CRAWL = [DmozSpider, EPGDspider, GDSpider]

# crawlers that are running
RUNNING_CRAWLERS = []

def spider_closing(spider):
    """
    Activates on spider closed signal
    """
    log.msg("Spider closed: %s" % spider, level=log.INFO)
    RUNNING_CRAWLERS.remove(spider)
    if not RUNNING_CRAWLERS:
        reactor.stop()

# start logger
log.start(loglevel=log.DEBUG)

# set up the crawler and start to crawl one spider at a time
for spider in TO_CRAWL:
    settings = Settings()

    # crawl responsibly
    settings.set("USER_AGENT", "Kiran Koduru (+http://kirankoduru.github.io)")
    crawler = Crawler(settings)
    crawler_obj = spider()
    RUNNING_CRAWLERS.append(crawler_obj)

    # stop reactor when spider closes
    crawler.signals.connect(spider_closing, signal=signals.spider_closed)
    crawler.configure()
    crawler.crawl(crawler_obj)
    crawler.start()

# blocks process; so always keep as the last statement
reactor.run()
And here is my spider code:
import scrapy
from scrapy.http import Request
from scrapy.selector import Selector
# EPGD is the Item subclass defined in the project's items module

class EPGDspider(scrapy.Spider):
    name = "EPGD"
    allowed_domains = ["epgd.biosino.org"]
    term = "man"
    start_urls = ["http://epgd.biosino.org/EPGD/search/textsearch.jsp?textquery=" + term + "&submit=Feeling+Lucky"]
    MONGODB_DB = name + "_" + term
    MONGODB_COLLECTION = name + "_" + term

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//tr[@class="odd"]|//tr[@class="even"]')
        url_list = []
        base_url = "http://epgd.biosino.org/EPGD"

        for site in sites:
            item = EPGD()
            item['genID'] = map(unicode.strip, site.xpath('td[1]/a/text()').extract())
            item['genID_url'] = base_url + map(unicode.strip, site.xpath('td[1]/a/@href').extract())[0][2:]
            item['taxID'] = map(unicode.strip, site.xpath('td[2]/a/text()').extract())
            item['taxID_url'] = map(unicode.strip, site.xpath('td[2]/a/@href').extract())
            item['familyID'] = map(unicode.strip, site.xpath('td[3]/a/text()').extract())
            item['familyID_url'] = base_url + map(unicode.strip, site.xpath('td[3]/a/@href').extract())[0][2:]
            item['chromosome'] = map(unicode.strip, site.xpath('td[4]/text()').extract())
            item['symbol'] = map(unicode.strip, site.xpath('td[5]/text()').extract())
            item['description'] = map(unicode.strip, site.xpath('td[6]/text()').extract())
            yield item

        sel_tmp = Selector(response)
        link = sel_tmp.xpath('//span[@id="quickPage"]')

        for site in link:
            url_list.append(site.xpath('a/@href').extract())

        for i in range(len(url_list[0])):
            if cmp(url_list[0][i], "#") == 0:
                if i + 1 < len(url_list[0]):
                    print url_list[0][i + 1]
                    actual_url = "http://epgd.biosino.org/EPGD/search/" + url_list[0][i + 1]
                    yield Request(actual_url, callback=self.parse)
                    break
                else:
                    print "The index is out of range!"
As you can see, there is a parameter term = "man" in my code, and it is part of my start_urls. I don't want this parameter hard-coded, so I'm wondering how I can provide the start URL or the term parameter dynamically in my program. When running a spider from the command line, there is a way to pass an argument, like this:
class MySpider(BaseSpider):

    name = 'my_spider'

    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        self.start_urls = [kwargs.get('start_url')]
And start it like:

scrapy crawl my_spider -a start_url="http://some_url"
Can anyone tell me how to deal with this?
Answer 0 (score: 8)
First of all, to run multiple spiders in a script, the recommended way is to use scrapy.crawler.CrawlerProcess, where you pass spider classes rather than spider instances.
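For reference, a minimal sketch of that pattern (assuming the three spider classes from the question are importable, and reusing the USER_AGENT string from the original snippet):

from scrapy.crawler import CrawlerProcess
from scrapy.settings import Settings

settings = Settings()
settings.set("USER_AGENT", "Kiran Koduru (+http://kirankoduru.github.io)")

process = CrawlerProcess(settings)

# pass spider *classes*; CrawlerProcess manages the Twisted reactor itself,
# so there is no manual reactor.run() / reactor.stop() to deal with
process.crawl(DmozSpider)
process.crawl(EPGDspider)
process.crawl(GDSpider)

process.start()  # blocks until all crawls are finished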
To pass arguments to your spiders with CrawlerProcess, you just add the arguments to the .crawl() call, after the spider subclass, e.g.:
process.crawl(DmozSpider, term='someterm', someotherterm='anotherterm')
Arguments passed this way are then available as spider attributes (the same as with -a term=someterm on the command line).
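In other words, inside the spider you can read self.term directly; a class-level attribute serves as the default when no argument is passed (the default shown here is just the value from the question):

class EPGDspider(scrapy.Spider):
    name = "EPGD"
    term = "man"  # default; overridden by process.crawl(EPGDspider, term=...)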
And finally, instead of building start_urls in __init__, you can achieve the same with start_requests, and build the initial request using self.term, like this:
def start_requests(self):
    yield Request("http://epgd.biosino.org/"
                  "EPGD/search/textsearch.jsp?"
                  "textquery={}"
                  "&submit=Feeling+Lucky".format(self.term))