Scrapy times out on certain websites

Time: 2015-11-21 01:45:23

Tags: python web-scraping scrapy

On my own machine, I have tried

> scrapy fetch http://google.com/ 

> scrapy fetch http://stackoverflow.com/ 

and both work perfectly, yet somehow www.flyertalk.com behaves badly with Scrapy. I keep getting a timeout error (180s):

> scrapy fetch http://www.flyertalk.com/ 

However, curl works fine without a hiccup:

> curl -s http://www.flyertalk.com/ 

Very strange. Here is the full dump:

2015-11-20 17:35:07 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-11-20 17:35:07 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-11-20 17:35:07 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-11-20 17:35:07 [scrapy] INFO: Enabled item pipelines: 
2015-11-20 17:35:07 [scrapy] INFO: Spider opened
2015-11-20 17:35:07 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-11-20 17:35:07 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6037
2015-11-20 17:36:07 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-11-20 17:37:07 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-11-20 17:38:07 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-11-20 17:38:07 [scrapy] DEBUG: Retrying <GET http://www.flyertalk.com> (failed 1 times): User timeout caused connection failure: Getting http://www.flyertalk.com took longer than 180.0 seconds..
2015-11-20 17:39:07 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-11-20 17:40:07 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-11-20 17:41:07 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-11-20 17:41:07 [scrapy] DEBUG: Retrying <GET http://www.flyertalk.com> (failed 2 times): User timeout caused connection failure: Getting http://www.flyertalk.com took longer than 180.0 seconds..

1 Answer:

Answer 0 (score: 1)

I experimented a little. The USER-AGENT header makes all the difference:

$ scrapy shell http://www.flyertalk.com/ -s USER_AGENT='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.80 Safari/537.36'
In [1]: response.xpath("//title/text()").extract_first().strip()
Out[1]: u"FlyerTalk - The world's most popular frequent flyer community - FlyerTalk is a living, growing community where frequent travelers around the world come to exchange knowledge and experiences about everything miles and points related."

Without that header specified, I saw it hang forever.
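
For an actual crawl (rather than a one-off shell session), the same fix can be applied per spider or project-wide. Below is a minimal sketch, assuming a standard Scrapy project; the spider name is hypothetical and the User-Agent string is just the browser string used above:

import scrapy

class FlyerTalkSpider(scrapy.Spider):
    # hypothetical spider name, used here only for illustration
    name = "flyertalk"
    start_urls = ["http://www.flyertalk.com/"]

    # override Scrapy's default User-Agent with a browser-like one,
    # since the site appears to stall requests that identify as Scrapy
    custom_settings = {
        "USER_AGENT": (
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) "
            "AppleWebKit/537.36 (KHTML, like Gecko) "
            "Chrome/46.0.2490.80 Safari/537.36"
        ),
    }

    def parse(self, response):
        # extract the page title to confirm the request actually went through
        yield {"title": response.xpath("//title/text()").extract_first().strip()}

Alternatively, the USER_AGENT setting can go in the project's settings.py, and the same -s USER_AGENT='...' override used with scrapy shell above should work with scrapy fetch as well.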