What I'm trying to do is trigger a function (abc) when a Scrapy spider is opened, which Scrapy's 'signals' should be able to do.
(Later I want to change it to 'closed' so I can save each spider's stats to a database for daily monitoring.) For now I'm trying this simple solution of just printing something out at the moment the spider is opened, so I know what I should expect to see in the console when I run the crawler process.
The crawler process runs fine, but no output from 'abc' is printed when the spider opens. At the bottom I've posted what I see in the console, which shows the spider itself runs perfectly well.
Why isn't the abc function triggered by the signal behind 'INFO: Spider opened' in the log (or by any signal at all)?
MyCrawlerProcess.py:
from twisted.internet import reactor
from scrapy import signals
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
process = CrawlerProcess(get_project_settings())
def abc():
print '######################works!######################'
def from_crawler(crawler):
crawler.signals.connect(abc, signal=signals.spider_opened)
process.crawl('Dissident')
process.start() # the script will block here until the crawling is finished
Console output:
2016-03-17 13:00:14 [scrapy] INFO: Scrapy 1.0.4 started (bot: Chrome 41.0.2227.1. Mozilla/5.0 (Macintosh; Intel Mac Osource)
2016-03-17 13:00:14 [scrapy] INFO: Optional features available: ssl, http11
2016-03-17 13:00:14 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scrapytry.spiders', 'SPIDER_MODULES': ['scrapytry.spiders'], 'DOWNLOAD_DELAY': 5, 'BOT_NAME': 'Chrome 41.0.2227.1. Mozilla/5.0 (Macintosh; Intel Mac Osource'}
2016-03-17 13:00:14 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2016-03-17 13:00:14 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-03-17 13:00:14 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-03-17 13:00:14 [scrapy] INFO: Enabled item pipelines: ImagesPipeline, FilesPipeline, ScrapytryPipeline
2016-03-17 13:00:14 [scrapy] INFO: Spider opened
2016-03-17 13:00:14 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-03-17 13:00:14 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-03-17 13:00:14 [scrapy] DEBUG: Crawled (200) <GET http://www.xyz.zzm/> (referer: None)
Answer 0 (score: 0)
Simply defining from_crawler isn't enough, because nothing hooks it into the Scrapy framework. Scrapy calls from_crawler on components it instantiates itself (extensions, middlewares, pipelines, spiders), never on a bare function sitting in your launcher script. Take a look at the docs here, which show how to create an extension that does exactly what you're attempting. Be sure to follow the instructions for enabling the extension, i.e. the MYEXT_ENABLED toggle plus adding it to the EXTENSIONS setting.
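For reference, here is a minimal sketch of such an extension, kept in the Python 2 / Scrapy 1.0 style of your script. The module path scrapytry/extensions.py and the class name SpiderOpenedExtension are placeholders for your project, and the stats-saving part is left as a stub:

scrapytry/extensions.py:
from scrapy import signals
from scrapy.exceptions import NotConfigured

class SpiderOpenedExtension(object):

    @classmethod
    def from_crawler(cls, crawler):
        # This classmethod IS called by Scrapy, because the class is
        # registered in the EXTENSIONS setting below.
        if not crawler.settings.getbool('MYEXT_ENABLED'):
            raise NotConfigured
        ext = cls()
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_opened(self, spider):
        print '######################works!######################'

    def spider_closed(self, spider):
        # later: save spider.crawler.stats.get_stats() to the database
        pass

And in settings.py (the priority value 500 is arbitrary):
MYEXT_ENABLED = True
EXTENSIONS = {
    'scrapytry.extensions.SpiderOpenedExtension': 500,
}

Alternatively, since you control the spider, you can override from_crawler on the spider class itself and connect the signals there; the extension route just keeps the behaviour reusable across all your spiders.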