Here is my middleware:
from scrapy.contrib.downloadermiddleware.useragent import UserAgentMiddleware
from scrapy.exceptions import IgnoreRequest
from scrapy import log

class FilterURLs(object):

    def process_response(self, request, response, spider):
        if response.status == 301:
            return response
        else:
            headers = ['text/html; charset=UTF-8', 'text/html; charset=utf-8', 'text/html;charset=UTF-8', 'text/html;charset=utf-8', 'text/html;charset=ISO-8859-1', 'application/xhtml+xml; charset=utf-8']
            log.msg("In Middleware " + repr(response.headers['Content-Type']), level=log.INFO)
            for header in headers:
                if response.headers['Content-Type'] != header:
                    raise IgnoreRequest()
                else:
                    return response
My error:
2014-01-09 13:08:56+0530 [crawler] DEBUG: Redirecting (301) to <GET http://www.altria.com/Pages/default.aspx> from <GET http://www.altria.com>
2014-01-09 13:08:58+0530 [scrapy] INFO: In Middleware 'text/html; charset=utf-8'
2014-01-09 13:08:58+0530 [crawler] ERROR: Error downloading <GET http://www.altria.com/Pages/default.aspx>
Traceback (most recent call last):
My scraper stops because of this error. Is it not possible to crawl redirected links? Is it because the Content-Type of the redirected link cannot be obtained?
Answer 0 (score: 3)
Update:
On a second look, I cannot reproduce your error with Scrapy 0.20.
Middleware:
from scrapy.contrib.downloadermiddleware.useragent import UserAgentMiddleware
from scrapy.exceptions import IgnoreRequest
from scrapy import log

class FilterURLs(object):

    def process_response(self, request, response, spider):
        if response.status == 301:
            return response
        else:
            headers = ['text/html; charset=UTF-8', 'text/html; charset=utf-8', 'text/html;charset=UTF-8', 'text/html;charset=utf-8', 'text/html;charset=ISO-8859-1', 'application/xhtml+xml; charset=utf-8']
            log.msg("In Middleware " + repr(response.headers['Content-Type']), level=log.INFO)
            for header in headers:
                if response.headers['Content-Type'] != header:
                    log.msg("Ignoring response %r" % request)
                    raise IgnoreRequest()
                else:
                    return response
Spider:
from scrapy.spider import BaseSpider

class MySpider(BaseSpider):
    name = 'filtertest'
    start_urls = ['http://www.altria.com']

    def parse(self, response):
        self.log(response.url)
Settings:
DOWNLOADER_MIDDLEWARES = {
    'mytest.dlmw.FilterURLs': 1,
}
Output of scrapy crawl filtertest:
2014-01-10 10:05:27-0400 [scrapy] INFO: Scrapy 0.20.0 started (bot: pipetest)
2014-01-10 10:05:27-0400 [scrapy] DEBUG: Optional features available: ssl, http11, boto, django
2014-01-10 10:05:27-0400 [scrapy] DEBUG: Overridden settings: {'NEWSPIDER_MODULE': 'pipetest.spiders', 'SPIDER_MODULES': ['pipetest.spiders'], 'BOT_NAME': 'pipetest'}
2014-01-10 10:05:27-0400 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-01-10 10:05:27-0400 [scrapy] DEBUG: Enabled downloader middlewares: FilterURLs, HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, HttpProxyMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-01-10 10:05:27-0400 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-01-10 10:05:27-0400 [scrapy] DEBUG: Enabled item pipelines: MyPipeline
2014-01-10 10:05:27-0400 [filtertest] INFO: Spider opened
2014-01-10 10:05:27-0400 [filtertest] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-01-10 10:05:27-0400 [filtertest] DEBUG: [MyPipeline] Initializing resources for filtertest
2014-01-10 10:05:27-0400 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-01-10 10:05:27-0400 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-01-10 10:05:28-0400 [filtertest] DEBUG: Redirecting (301) to <GET http://www.altria.com/Pages/default.aspx> from <GET http://www.altria.com>
2014-01-10 10:05:28-0400 [scrapy] INFO: In Middleware 'text/html; charset=utf-8'
2014-01-10 10:05:28-0400 [scrapy] INFO: Ignoring response <GET http://www.altria.com/Pages/default.aspx>
2014-01-10 10:05:28-0400 [filtertest] INFO: Closing spider (finished)
2014-01-10 10:05:28-0400 [filtertest] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 458,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 21358,
'downloader/response_count': 2,
'downloader/response_status_count/200': 1,
'downloader/response_status_count/301': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2014, 1, 10, 14, 5, 28, 452610),
'log_count/DEBUG': 8,
'log_count/INFO': 5,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2014, 1, 10, 14, 5, 27, 748879)}
2014-01-10 10:05:28-0400 [filtertest] INFO: Spider closed (finished)
Previous answer: try this:
allowed_headers = ['text/html; charset=UTF-8', 'text/html; charset=utf-8', 'text/html;charset=UTF-8', 'text/html;charset=utf-8', 'text/html;charset=ISO-8859-1', 'application/xhtml+xml; charset=utf-8']
log.msg("In Middleware " + repr(response.headers['Content-Type']), level=log.INFO)
if response.headers['Content-Type'] in allowed_headers:
    return response
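For context, a minimal sketch of what the whole process_response could look like with this membership check. The class and constant names are just placeholders carried over from your question, and raising IgnoreRequest for everything not in the list is my assumption about the intended behaviour:

from scrapy.exceptions import IgnoreRequest
from scrapy import log

# Allowed Content-Type values, as in the question's list.
ALLOWED_CONTENT_TYPES = [
    'text/html; charset=UTF-8', 'text/html; charset=utf-8',
    'text/html;charset=UTF-8', 'text/html;charset=utf-8',
    'text/html;charset=ISO-8859-1', 'application/xhtml+xml; charset=utf-8',
]

class FilterURLs(object):

    def process_response(self, request, response, spider):
        # Let redirects through so RedirectMiddleware can follow them.
        if response.status == 301:
            return response
        content_type = response.headers['Content-Type']
        log.msg("In Middleware " + repr(content_type), level=log.INFO)
        # Keep the response only if its Content-Type is in the allowed list.
        if content_type in ALLOWED_CONTENT_TYPES:
            return response
        raise IgnoreRequest()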
Answer 1 (score: 0)
I have no experience with scrapy, but your stack trace shows that you are raising IgnoreRequest() explicitly. If you only want to raise the exception on an incompatible header, replace the for loop
for header in headers:
    if response.headers['Content-Type'] != header:
        raise IgnoreRequest()
    else:
        return response
with
if not response.headers['Content-Type'] in headers:
    raise IgnoreRequest()
else:
    return response
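To see why the loop misbehaves, here is a small self-contained sketch (plain Python, no Scrapy) that replays both checks against the Content-Type from your log, 'text/html; charset=utf-8'. The helper names loop_check and membership_check are just illustrative:

headers = ['text/html; charset=UTF-8', 'text/html; charset=utf-8',
           'text/html;charset=UTF-8', 'text/html;charset=utf-8',
           'text/html;charset=ISO-8859-1', 'application/xhtml+xml; charset=utf-8']

content_type = 'text/html; charset=utf-8'  # value from the log line above

def loop_check(content_type):
    # Original loop: it decides on the FIRST list entry only.
    # 'text/html; charset=utf-8' != 'text/html; charset=UTF-8', so the
    # response is dropped even though the value appears later in the list.
    for header in headers:
        if content_type != header:
            return 'ignored'
        else:
            return 'kept'

def membership_check(content_type):
    # Suggested fix: look at the whole list before deciding.
    return 'kept' if content_type in headers else 'ignored'

print(loop_check(content_type))        # -> ignored (the bug)
print(membership_check(content_type))  # -> kept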