I am trying to run a spider, but I get this log:
2015-05-15 12:44:43+0100 [scrapy] INFO: Scrapy 0.24.5 started (bot: reviews)
2015-05-15 12:44:43+0100 [scrapy] INFO: Optional features available: ssl, http11
2015-05-15 12:44:43+0100 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'reviews.spiders', 'SPIDER_MODULES': ['reviews.spiders'], 'DOWNLOAD_DELAY': 2, 'BOT_NAME': 'reviews'}
2015-05-15 12:44:43+0100 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-05-15 12:44:43+0100 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-05-15 12:44:43+0100 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-05-15 12:44:43+0100 [scrapy] INFO: Enabled item pipelines:
2015-05-15 12:44:43+0100 [theverge] INFO: Spider opened
2015-05-15 12:44:43+0100 [theverge] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-05-15 12:44:43+0100 [scrapy] ERROR: Error caught on signal handler: <bound method ?.start_listening of <scrapy.telnet.TelnetConsole instance at 0x105127b48>>
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py", line 1107, in _inlineCallbacks
result = g.send(result)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/core/engine.py", line 77, in start
yield self.signals.send_catch_log_deferred(signal=signals.engine_started)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/signalmanager.py", line 23, in send_catch_log_deferred
return signal.send_catch_log_deferred(*a, **kw)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/utils/signal.py", line 53, in send_catch_log_deferred
*arguments, **named)
--- <exception caught here> ---
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py", line 140, in maybeDeferred
result = f(*args, **kw)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/xlib/pydispatch/robustapply.py", line 54, in robustApply
return receiver(*arguments, **named)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/telnet.py", line 47, in start_listening
self.port = listen_tcp(self.portrange, self.host, self)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/utils/reactor.py", line 14, in listen_tcp
return reactor.listenTCP(x, factory, interface=host)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 495, in listenTCP
p.startListening()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/tcp.py", line 984, in startListening
raise CannotListenError(self.interface, self.port, le)
twisted.internet.error.CannotListenError: Couldn't listen on 127.0.0.1:6073: [Errno 48] Address already in use.
This first error, "[Errno 48] Address already in use.", has started appearing in all of my spiders, although the other spiders still work anyway. Then comes:
2015-05-15 12:44:43+0100 [scrapy] DEBUG: Web service listening on 127.0.0.1:6198
2015-05-15 12:44:44+0100 [theverge] DEBUG: Crawled (403) <GET http://www.theverge.com/reviews> (referer: None)
2015-05-15 12:44:44+0100 [theverge] DEBUG: Ignoring response <403 http://www.theverge.com/reviews>: HTTP status code is not handled or not allowed
2015-05-15 12:44:44+0100 [theverge] INFO: Closing spider (finished)
2015-05-15 12:44:44+0100 [theverge] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 191,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 265,
'downloader/response_count': 1,
'downloader/response_status_count/403': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 5, 15, 11, 44, 44, 136026),
'log_count/DEBUG': 3,
'log_count/ERROR': 1,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2015, 5, 15, 11, 44, 43, 829689)}
2015-05-15 12:44:44+0100 [theverge] INFO: Spider closed (finished)
2015-05-15 12:44:44+0100 [scrapy] ERROR: Error caught on signal handler: <bound method ?.stop_listening of <scrapy.telnet.TelnetConsole instance at 0x105127b48>>
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py", line 1107, in _inlineCallbacks
result = g.send(result)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/core/engine.py", line 300, in _finish_stopping_engine
yield self.signals.send_catch_log_deferred(signal=signals.engine_stopped)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/signalmanager.py", line 23, in send_catch_log_deferred
return signal.send_catch_log_deferred(*a, **kw)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/utils/signal.py", line 53, in send_catch_log_deferred
*arguments, **named)
--- <exception caught here> ---
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py", line 140, in maybeDeferred
result = f(*args, **kw)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/xlib/pydispatch/robustapply.py", line 54, in robustApply
return receiver(*arguments, **named)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/telnet.py", line 53, in stop_listening
self.port.stopListening()
exceptions.AttributeError: TelnetConsole instance has no attribute 'port'
The error "exceptions.AttributeError: TelnetConsole instance has no attribute 'port'" is new to me... I have no idea what is going on, since my other spiders for other sites all run fine.
Can anyone tell me how to fix this?
EDIT:
After a reboot that error is gone, but I still cannot crawl with this spider... here is the log now:
2015-05-15 15:46:55+0100 [scrapy] INFO: Scrapy 0.24.5 started (bot: reviews)
2015-05-15 15:46:55+0100 [scrapy] INFO: Optional features available: ssl, http11
2015-05-15 15:46:55+0100 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'reviews.spiders', 'SPIDER_MODULES': ['reviews.spiders'], 'DOWNLOAD_DELAY': 2, 'BOT_NAME': 'reviews'}
2015-05-15 15:46:55+0100 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-05-15 15:46:55+0100 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-05-15 15:46:55+0100 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-05-15 15:46:55+0100 [scrapy] INFO: Enabled item pipelines:
2015-05-15 15:46:55+0100 [theverge] INFO: Spider opened
2015-05-15 15:46:55+0100 [theverge] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-05-15 15:46:55+0100 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-05-15 15:46:55+0100 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-05-15 15:46:56+0100 [theverge] DEBUG: Crawled (403) <GET http://www.theverge.com/reviews> (referer: None)
2015-05-15 15:46:56+0100 [theverge] DEBUG: Ignoring response <403 http://www.theverge.com/reviews>: HTTP status code is not handled or not allowed
2015-05-15 15:46:56+0100 [theverge] INFO: Closing spider (finished)
2015-05-15 15:46:56+0100 [theverge] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 191,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 265,
'downloader/response_count': 1,
'downloader/response_status_count/403': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 5, 15, 14, 46, 56, 8769),
'log_count/DEBUG': 4,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2015, 5, 15, 14, 46, 55, 673723)}
2015-05-15 15:46:56+0100 [theverge] INFO: Spider closed (finished)
This "2015-05-15 15:46:56+0100 [theverge] DEBUG: Ignoring response <403 http://www.theverge.com/reviews>: HTTP status code is not handled or not allowed" is strange to me, because I am using download_delay = 2, and last week I could crawl that site without any problem... What could be happening?
Answer 0 (score: 4)
Address already in use
suggests that something else is already listening on that port — most likely you are running another spider in parallel? The second error is just a consequence of the first: since the port was never successfully bound, the TelnetConsole instance never got its port attribute set, and so it cannot find it when trying to close it.
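The underlying failure is simply the OS refusing a second bind on a port that is already taken; a minimal sketch with plain sockets, independent of Scrapy (the errno differs by platform — 48 on OS X, as in the log above, 98 on Linux):

```python
import socket

# First listener takes a port (we let the OS pick a free one)
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))
s1.listen(1)
port = s1.getsockname()[1]

# A second bind on the same port fails, just like the second telnet console did
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s2.bind(("127.0.0.1", port))
    in_use = False
except OSError:  # EADDRINUSE: [Errno 48] on OS X, [Errno 98] on Linux
    in_use = True
finally:
    s1.close()
    s2.close()

print(in_use)  # True: the port was already in use
```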
I suggest rebooting to make sure no ports are left in use, and then running only a single spider to see whether it works. If it happens again, you can investigate which application is holding the port with netstat (e.g. netstat -an | grep 6073), lsof -i :6073, or a similar tool.
Update: the HTTP error 403 Forbidden most likely means you have been banned by the site for making too many requests. To work around that, use a proxy server. Check out Scrapy's HttpProxyMiddleware.
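A minimal sketch of how a proxy could be wired in: HttpProxyMiddleware is enabled by default, and it honours the 'proxy' key in request.meta, so the spider can set it per request. The proxy URL below is a placeholder, not a real endpoint — substitute your own.

```python
import scrapy

class TheVergeSpider(scrapy.Spider):
    name = "theverge"
    start_urls = ["http://www.theverge.com/reviews"]

    def start_requests(self):
        # HttpProxyMiddleware (on by default) picks up the 'proxy' meta key.
        # 'http://proxy.example.com:8080' is a placeholder address.
        for url in self.start_urls:
            yield scrapy.Request(url, meta={"proxy": "http://proxy.example.com:8080"})
```

Alternatively, the middleware also reads the standard http_proxy / https_proxy environment variables, so exporting those before running scrapy crawl achieves the same effect without touching the spider.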
Answer 1 (score: 2)
Modifying the settings.py file in your project may help with the 403 error:
DOWNLOADER_MIDDLEWARES = {
    # Scrapy 0.24 (the version in the log) uses the 'scrapy.contrib' path;
    # 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware' is the Scrapy 1.x path.
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
}
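Instead of disabling the middleware outright, it is often enough to give Scrapy a browser-like User-Agent, since many sites return 403 for the default Scrapy UA. An untested sketch of the relevant settings.py fragment (the UA string is just an example; any current browser UA works):

```python
# settings.py — sketch, adjust the UA string to taste
USER_AGENT = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0 Safari/537.36"
)

# Optionally let the spider callbacks see 403 responses instead of
# HttpErrorMiddleware silently dropping them, so you can inspect the body:
HTTPERROR_ALLOWED_CODES = [403]
```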