Scrapy: create a folder structure from the URLs of downloaded images

Date: 2012-10-18 14:09:47

Tags: python scrapy

I have a set of links that define the structure of a website. When downloading images from those links, I want to place the downloaded images into a folder structure that mirrors the website's structure, rather than just renaming the files (as described in Scrapy image download how to use custom filename).

My code looks like this:

import os
from urlparse import urlparse  # Python 2 / Scrapy of that era

from scrapy.http import Request
from scrapy.contrib.pipeline.images import ImagesPipeline


class MyImagesPipeline(ImagesPipeline):
    """Custom image pipeline to rename images as they are being downloaded"""
    page_url=None
    def image_key(self, url):
        page_url=self.page_url
        image_guid = url.split('/')[-1]
        return '%s/%s/%s' % (page_url,image_guid.split('_')[0],image_guid)

    def get_media_requests(self, item, info):
        #http://store.abc.com/b/n/s/m
        os.system('mkdir '+item['sku'][0].encode('ascii','ignore'))
        self.page_url = urlparse(item['start_url']).path #I store the parent page's url in start_url Field
        for image_url in item['image_urls']:
            yield Request(image_url)

It creates the desired folder structure, but when I drill down into the deeper folders I find that the files have been placed in the wrong folders.

I suspect this is happening because get_media_requests and image_key may be executed asynchronously, so the value of page_url changes before image_key gets to use it.
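One way to confirm this suspicion is to log self.page_url in both methods; when several items are processed concurrently, the value logged inside image_key often belongs to a different item than the one whose image request is being handled. A minimal sketch of that check against the pipeline above (the logging calls are illustrative additions only, not part of the original code):

import logging

class MyImagesPipeline(ImagesPipeline):
    page_url = None

    def get_media_requests(self, item, info):
        self.page_url = urlparse(item['start_url']).path
        logging.debug('get_media_requests: page_url=%s', self.page_url)
        for image_url in item['image_urls']:
            yield Request(image_url)

    def image_key(self, url):
        # By the time this runs, self.page_url may already have been
        # overwritten by get_media_requests() for a later item.
        logging.debug('image_key: url=%s page_url=%s', url, self.page_url)
        image_guid = url.split('/')[-1]
        return '%s/%s/%s' % (self.page_url, image_guid.split('_')[0], image_guid)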

1 Answer:

Answer (score: 1)

You are absolutely right that asynchronous item processing prevents using class attributes via self in the pipeline. You have to store the path with each request and override a few more methods (untested):

def image_key(self, url, page_url):
    image_guid = url.split('/')[-1]
    return '%s/%s/%s' % (page_url, image_guid.split('_')[0], image_guid)

def get_media_requests(self, item, info):
    for image_url in item['image_urls']:
        # carry the parent page's path with every request via meta
        yield Request(image_url, meta=dict(page_url=urlparse(item['start_url']).path))

def get_images(self, response, request, info):
    key = self.image_key(request.url, request.meta.get('page_url'))
    ...

def media_to_download(self, request, info):
    ...
    key = self.image_key(request.url, request.meta.get('page_url'))
    ...

def media_downloaded(self, response, request, info):
    ...
    try:
        key = self.image_key(request.url, request.meta.get('page_url'))
    ...
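
For reference, newer Scrapy releases replaced the image_key hook with file_path, which receives the request itself, so the per-request meta shown above is all that is needed and none of the media_* methods have to be overridden. A rough sketch under that assumption, reusing the start_url and image_urls fields from the question (the page_path meta key is just an illustrative name, and this is untested against the asker's spider):

import os
from urllib.parse import urlparse

from scrapy import Request
from scrapy.pipelines.images import ImagesPipeline


class MyImagesPipeline(ImagesPipeline):
    """Store each image under a path that mirrors its parent page's URL."""

    def get_media_requests(self, item, info):
        # Attach the parent page's path to each request instead of keeping
        # it in a shared instance attribute.
        page_path = urlparse(item['start_url']).path.strip('/')
        for image_url in item['image_urls']:
            yield Request(image_url, meta={'page_path': page_path})

    def file_path(self, request, response=None, info=None, *, item=None):
        # Rebuild the site-like folder layout from the per-request meta.
        image_guid = request.url.split('/')[-1]
        return os.path.join(request.meta['page_path'],
                            image_guid.split('_')[0],
                            image_guid)

The returned path is interpreted relative to IMAGES_STORE, and Scrapy creates the intermediate directories itself, so the os.system('mkdir ...') call from the question is no longer needed.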