I keep getting the following error, without knowing which image file caused it or the response URL that would let me trace it:
2012-08-20 08:14:34+0000 [spider] Unhandled Error
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 545, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 362, in callback
    self._startRunCallbacks(result)
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 458, in _startRunCallbacks
    self._runCallbacks()
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 545, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
--- <exception caught here> ---
  File "/usr/lib/pymodules/python2.7/scrapy/contrib/pipeline/images.py", line 204, in media_downloaded
    checksum = self.image_downloaded(response, request, info)
  File "/usr/lib/pymodules/python2.7/scrapy/contrib/pipeline/images.py", line 252, in image_downloaded
    for key, image, buf in self.get_images(response, request, info):
  File "/usr/lib/pymodules/python2.7/scrapy/contrib/pipeline/images.py", line 261, in get_images
    orig_image = Image.open(StringIO(response.body))
  File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 1980, in open
    raise IOError("cannot identify image file")
exceptions.IOError: cannot identify image file
So, how can I solve this problem? I have already configured settings.py so that my spider stops after a certain number of errors.

Answer (score: 3)
The offending line is in scrapy.contrib.pipeline.images.ImagesPipeline, which hands the raw response body to PIL's Image.open():
def get_images(self, response, request, info):
    key = self.image_key(request.url)
    orig_image = Image.open(StringIO(response.body))
The try block in media_downloaded() catches this exception, but only reports it as an unhandled error:
except Exception:
    log.err(spider=info.spider)
You could hack this file so that an IOError is logged as a warning and re-raised as an ImageException, which the pipeline already treats as a normal download failure:
try:
    key = self.image_key(request.url)
    checksum = self.image_downloaded(response, request, info)
except ImageException as ex:
    log.msg(str(ex), level=log.WARNING, spider=info.spider)
    raise
except IOError as ex:
    log.msg(str(ex), level=log.WARNING, spider=info.spider)
    raise ImageException
except Exception:
    log.err(spider=info.spider)
    raise ImageException
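The idea behind the hack is exception translation: a low-level IOError from image decoding is converted into the pipeline-level ImageException, so the caller sees an ordinary "media failed" signal instead of an unhandled error. Here is a minimal, framework-free sketch of that pattern; decode_image() and the warnings list are illustrative stand-ins for PIL's Image.open() and Scrapy's log.msg(), not real Scrapy APIs:

```python
class ImageException(Exception):
    """Pipeline-level signal: the response could not be decoded as an image."""

def decode_image(body):
    # Stand-in for PIL's Image.open(): accept only bytes with a JPEG magic header.
    if not body.startswith(b"\xff\xd8"):
        raise IOError("cannot identify image file")
    return "image"

def media_downloaded(body, warnings):
    try:
        return decode_image(body)
    except IOError as ex:
        warnings.append(str(ex))       # stands in for log.msg(..., level=WARNING)
        raise ImageException(str(ex))  # translated: no longer an "Unhandled Error"

warnings = []
try:
    media_downloaded(b"not an image", warnings)
except ImageException:
    pass  # the caller can treat this as an ordinary failed download
```

The failure is still recorded (the warning list gets one entry), but it now flows through the pipeline's normal error path instead of Twisted's unhandled-error reporting.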
A better option, though, is to create your own pipeline and override the image_downloaded() method in your pipelines.py file:
from scrapy import log
from scrapy.contrib.pipeline.images import ImagesPipeline

class BkamImagesPipeline(ImagesPipeline):
    def image_downloaded(self, response, request, info):
        try:
            # Return the parent's checksum so media_downloaded() still gets it
            return super(BkamImagesPipeline, self).image_downloaded(response, request, info)
        except IOError as ex:
            log.msg(str(ex), level=log.WARNING, spider=info.spider)
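The subclass-and-wrap shape of that override can be shown without Scrapy at all. In this sketch the base class is a stand-in for scrapy.contrib.pipeline.images.ImagesPipeline (its body, the checksum format, and the warnings list are invented for illustration); the subclass delegates with super(), passes a successful checksum through, and logs and swallows an IOError so one bad image does not abort the crawl:

```python
class FakeImagesPipeline(object):
    # Stand-in for Scrapy's ImagesPipeline: "decodes" the body, returns a checksum.
    def image_downloaded(self, body):
        if not body.startswith(b"\xff\xd8"):
            raise IOError("cannot identify image file")
        return "checksum:%d" % len(body)

class BkamImagesPipeline(FakeImagesPipeline):
    def __init__(self):
        self.warnings = []

    def image_downloaded(self, body):
        try:
            # Delegate to the parent; on success, pass its checksum through.
            return super(BkamImagesPipeline, self).image_downloaded(body)
        except IOError as ex:
            self.warnings.append(str(ex))  # stands in for log.msg(..., WARNING)
            return None                    # swallow: one bad image, crawl goes on

pipe = BkamImagesPipeline()
good = pipe.image_downloaded(b"\xff\xd8abc")       # decodes fine
bad = pipe.image_downloaded(b"<html>404</html>")   # logged, returns None
```

The advantage over patching Scrapy's source is that the fix lives in your project and survives Scrapy upgrades.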
Be sure to declare this pipeline in your settings file:
ITEM_PIPELINES = [
    'bkam.pipelines.BkamImagesPipeline',
]
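Relatedly, since the question mentions stopping the spider after a certain number of errors: Scrapy's built-in CloseSpider extension can do that via the CLOSESPIDER_ERRORCOUNT setting. The threshold below is only an example value, not a recommendation:

```python
# settings.py -- close the spider once this many errors have been logged.
# 10 is an illustrative threshold; pick one that suits your crawl.
CLOSESPIDER_ERRORCOUNT = 10
```

With the custom pipeline in place, decode failures are logged as warnings rather than errors, so they no longer count toward this limit.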