When I run scrapy crawl myspider, it only works some of the time: a run may fail outright, but if I repeat the same command a few times it usually succeeds. I haven't changed anything in between runs. The inconsistency makes me believe the server may be the problem (a 302 redirect versus a 200), but I'm curious whether anyone has a way of handling this. I'm currently scraping a site I host on GitHub via a CNAME. Here is the log I get when the spider closes without crawling anything:
2015-10-24 12:12:05 [scrapy] INFO: Scrapy 1.0.3 started (bot: myproject)
2015-10-24 12:12:05 [scrapy] INFO: Optional features available: ssl, http11
2015-10-24 12:12:05 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'myproject.spiders', 'SPIDER_MODULES': ['myproject.spiders'], 'BOT_NAME': 'myproject'}
2015-10-24 12:12:05 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState, AutoThrottle
2015-10-24 12:12:06 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-10-24 12:12:06 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-10-24 12:12:06 [scrapy] INFO: Enabled item pipelines:
2015-10-24 12:12:06 [scrapy] INFO: Spider opened
2015-10-24 12:12:06 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-10-24 12:12:06 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-10-24 12:12:06 [scrapy] DEBUG: Redirecting (302) to <GET http://my-website.com/> from <GET http://my-website.com>
2015-10-24 12:12:06 [scrapy] DEBUG: Filtered duplicate request: <GET http://my-website.com/> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2015-10-24 12:12:06 [scrapy] INFO: Closing spider (finished)
2015-10-24 12:12:06 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 209,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 78,
'downloader/response_count': 1,
'downloader/response_status_count/302': 1,
'dupefilter/filtered': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 10, 24, 19, 12, 6, 307249),
'log_count/DEBUG': 3,
'log_count/INFO': 8,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2015, 10, 24, 19, 12, 6, 45050)}
2015-10-24 12:12:06 [scrapy] INFO: Spider closed (finished)
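The "Filtered duplicate request" line looks like the key: my guess is that Scrapy's default dupefilter compares request fingerprints, and the start URL and the 302 target differ only by the trailing slash, which fingerprinting canonicalizes away. A quick check that seems to confirm this (assuming Scrapy 1.0's scrapy.utils.request.request_fingerprint, with my real domain swapped for a stand-in):

import scrapy
from scrapy.utils.request import request_fingerprint

# http://my-website.com and http://my-website.com/ canonicalize to the
# same URL (an empty path is treated as "/"), so their fingerprints
# collide and the redirected request is dropped as already seen.
fp_no_slash = request_fingerprint(scrapy.Request("http://my-website.com"))
fp_slash = request_fingerprint(scrapy.Request("http://my-website.com/"))
print(fp_no_slash == fp_slash)  # expected: True

So whenever the server answers 302 instead of 200, the spider has nothing left to crawl and closes immediately.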
Here is the beginning of a successful scrapy run:
2015-10-24 12:15:14 [scrapy] INFO: Scrapy 1.0.3 started (bot: myproject)
2015-10-24 12:15:14 [scrapy] INFO: Optional features available: ssl, http11
2015-10-24 12:15:14 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'myproject.spiders', 'SPIDER_MODULES': ['myproject.spiders'], 'BOT_NAME': 'myproject'}
2015-10-24 12:15:15 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState, AutoThrottle
2015-10-24 12:15:15 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-10-24 12:15:15 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-10-24 12:15:15 [scrapy] INFO: Enabled item pipelines:
2015-10-24 12:15:15 [scrapy] INFO: Spider opened
2015-10-24 12:15:15 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-10-24 12:15:15 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-10-24 12:15:15 [scrapy] DEBUG: Crawled (200) <GET http://my-website.com> (referer: None)
2015-10-24 12:15:15 [scrapy] DEBUG: Filtered duplicate request: <GET http://my-website.com/> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
...
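The workaround I'm considering is to keep the dupefilter from discarding the redirect target. This is just a sketch (spider and domain names are placeholders for my real ones): since the redirect middleware builds the new request with request.replace(), which carries dont_filter over to the redirected request, yielding the start request with dont_filter=True should let the 302 target through:

import scrapy


class MySpider(scrapy.Spider):
    # Placeholder names standing in for my real spider and site.
    name = "myspider"
    start_urls = ["http://my-website.com"]

    def start_requests(self):
        for url in self.start_urls:
            # dont_filter=True survives the redirect middleware's
            # request.replace(), so the redirected request
            # <GET http://my-website.com/> is not dropped as a duplicate.
            yield scrapy.Request(url, callback=self.parse, dont_filter=True)

    def parse(self, response):
        # Only reached on a 200; log the final URL to confirm the
        # redirect was followed rather than filtered.
        self.logger.info("Crawled %s (%s)", response.url, response.status)

Is that a reasonable approach, or is there a cleaner way to handle the server sometimes answering 302 and sometimes 200?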