Scrapy spider that gets two pictures on the same page and names them differently

Date: 2016-03-08 21:39:15

Tags: python scrapy scrapy-spider scrapy-pipeline

I'm new to Python and Scrapy, so I'm not sure I've chosen the best approach; my goal is to fetch two (or more) different pictures from a single page and give each picture a different name.

How should I set up the pipelines — as one combined pipeline or as separate ones? Right now I've tried separate pipelines, but I can't get them to work. The first image downloads and is renamed perfectly, but the second image isn't downloaded at all (error message below).

I'm practicing on this page: http://www.allabolag.se/2321000016/STOCKHOLMS_LANS_LANDSTING

allabolagspider.py

from urlparse import urljoin  # Python 2.7, per the traceback below

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

from allabolag.items import AllabolagItem


class allabolagspider(CrawlSpider):
    name = "allabolagspider"
    # allowed_domains = ["byralistan.se"]
    start_urls = [
        "http://www.allabolag.se/2321000016/STOCKHOLMS_LANS_LANDSTING"
    ]

    pipelines = ['AllabolagPipeline', 'AllabolagPipeline2']

    rules = (
        Rule(LinkExtractor(allow="http://www.allabolag.se/2321000016/STOCKHOLMS_LANS_LANDSTING"), callback='parse_link'),
    )

    def parse_link(self, response):
        for sel in response.xpath('//*[@class="reportTable"]'):
            image = AllabolagItem()
            tmptitle = response.xpath('.//tr[2]/td[2]/table//tr[13]/td/span/text()').extract()
            tmptitle.insert(0, "logo-")
            image['title'] = ["".join(tmptitle)]
            rel = response.xpath('.//tr[5]/td[2]/div[1]/div/a/img/@src').extract()
            image['image_urls'] = [urljoin(response.url, rel[0])]
            yield image

        for sel in response.xpath('//*[@class="mainWindow"]'):
            image2 = AllabolagItem()
            tmptitle2 = response.xpath('./div[2]/div[1]/ul/li[6]/a/text()').extract()
            tmptitle2.insert(0, "hej-")
            image2['title2'] = ["".join(tmptitle2)]
            rel2 = response.xpath('./div[3]/div[1]/a/img/@src').extract()
            image2['image_urls2'] = [urljoin(response.url, rel2[0])]
            yield image2
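The spider relies on `urljoin` to turn the page's relative `<img src>` values into absolute URLs for `image_urls`. A minimal illustration of that step, using an image path taken from the traceback further down (Python 3 import path shown; the question's Python 2.7 environment would use `from urlparse import urljoin`):

```python
from urllib.parse import urljoin

page_url = "http://www.allabolag.se/2321000016/STOCKHOLMS_LANS_LANDSTING"
rel_src = "/img/prv/2798135.JPG"  # a root-relative <img src>, as seen in the error output

# A root-relative path replaces the base URL's path, keeping scheme and host
print(urljoin(page_url, rel_src))
# http://www.allabolag.se/img/prv/2798135.JPG
```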

settings.py

BOT_NAME = 'allabolag'

SPIDER_MODULES = ['allabolag.spiders']
NEWSPIDER_MODULE = 'allabolag.spiders'

DOWNLOAD_DELAY = 2.5
CONCURRENT_REQUESTS = 250

USER_AGENT = "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36"

ITEM_PIPELINES = {
    'allabolag.pipelines.AllabolagPipeline': 1,
    'allabolag.pipelines.AllabolagPipeline2': 2,
}

IMAGES_STORE = 'Imagesfolder'

pipelines.py

import scrapy
from scrapy.pipelines.images import ImagesPipeline
import sqlite3 as lite
from allabolag import settings
from allabolag import items
con = None

class AllabolagPipeline(ImagesPipeline):
    def set_filename(self, response):
        return 'full/{0}.jpg'.format(response.meta['title'][0])

    def get_media_requests(self, item, info):
        for image_url in item['image_urls']:
            yield scrapy.Request(image_url, meta={'title': item['title']})

    def get_images(self, response, request, info):
        for key, image, buf in super(AllabolagPipeline, self).get_images(response, request, info):
            key = self.set_filename(response)
            yield key, image, buf

class AllabolagPipeline2(ImagesPipeline):
    def set_filename(self, response):
        return 'full/{0}.jpg'.format(response.meta['title2'][0])

    def get_media_requests(self, item, info):
        for image_url2 in item['image_urls2']:
            yield scrapy.Request(image_url2, meta={'title2': item['title2']})

    def get_images(self, response, request, info):
        for key, image, buf in super(AllabolagPipeline2, self).get_images(response, request, info):
            key = self.set_filename(response)
            yield key, image, buf

Copy-paste from the terminal

2016-03-08 22:15:58 [scrapy] ERROR: Error processing {'image_urls': [u'http://www.allabolag.se/img/prv/2798135.JPG'],
 'images': [{'checksum': 'a567ec7c2bd99fd7eb20db42229a1bf9',
             'path': 'full/0280bf8228087cd571e86f43859552f9880e558a.jpg',
             'url': 'http://www.allabolag.se/img/prv/2798135.JPG'}],
 'title': [u'logo-UTDELNINGSADRESS']}
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-15.5.0-py2.7-macosx-10.6-intel.egg/twisted/internet/defer.py", line 588, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-1.0.3-py2.7.egg/scrapy/pipelines/media.py", line 45, in process_item
    dlist = [self._process_request(r, info) for r in requests]
  File "/Users/VickieB/Documents/Scrapy/Test1/tutorial/tandlakare/allabolag/pipelines.py", line 36, in get_media_requests
    for image_url2 in item['image_urls2']:
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-1.0.3-py2.7.egg/scrapy/item.py", line 56, in __getitem__
    return self._values[key]
KeyError: 'image_urls2'

1 Answer:

Answer 0 (score: 1)

There may be a few errors I haven't spotted, but I can explain one of them... A KeyError usually means a dictionary lookup failed. In this case it means that at some point during execution, an item (a dict-like object) that has no "image_urls2" key was passed to `def get_media_requests(self, item, info):`.
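The failure can be reproduced with a plain dict, independent of Scrapy — here using the field names from the question (the values are stand-ins):

```python
# An item built by the first loop: it has 'image_urls', but not 'image_urls2'
item = {
    'image_urls': ['http://www.allabolag.se/img/prv/2798135.JPG'],
    'title': ['logo-UTDELNINGSADRESS'],
}

try:
    item['image_urls2']           # bracket lookup on a missing key raises
except KeyError as e:
    print('missing key:', e)      # missing key: 'image_urls2'

print('image_urls2' in item)      # False — a membership test avoids the exception
```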

Changing `get_media_requests` to the following will show you when that happens, and should allow the script to keep running:

def get_media_requests(self, item, info):
    if "image_urls2" not in item:
        print("ERROR - 'image_urls2' NOT IN ITEM/DICT")
    else:
        for image_url2 in item['image_urls2']:
            yield scrapy.Request(image_url2, meta={'title2': item['title2']})

If you're lazy or don't care about a few missing values, you can wrap the whole thing in a try/except, like this:

def get_media_requests(self, item, info):
    try:
        for image_url2 in item['image_urls2']:
            yield scrapy.Request(image_url2, meta={'title2': item['title2']})
    except Exception as e:
        print(str(e))
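A third option (my addition, not part of the answer above): `dict.get` with an empty-list default makes the loop a no-op when the key is absent, with no conditional and no exception handling. Sketched here with plain dicts rather than Scrapy items; `urls_to_fetch` is a hypothetical helper for illustration:

```python
def urls_to_fetch(item):
    # .get() returns [] when 'image_urls2' is absent, so the loop body never runs
    return [url for url in item.get('image_urls2', [])]

print(urls_to_fetch({'image_urls': ['a.jpg']}))            # []
print(urls_to_fetch({'image_urls2': ['b.jpg', 'c.jpg']}))  # ['b.jpg', 'c.jpg']
```

Inside the real pipeline this would read `for image_url2 in item.get('image_urls2', []):` — the first item type simply produces no requests in `AllabolagPipeline2`.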