Scrapy does not download images

Date: 2017-07-11 06:57:16

Tags: scrapy

I am trying to download images from various URLs with Scrapy. I am new to Python and Scrapy, so perhaps I am missing something obvious. This is my first post on Stack Overflow. Any help is much appreciated!

Here are my files:

items.py

# -*- coding: utf-8 -*-
import scrapy


class PicscrapyItem(scrapy.Item):
    image_urls = scrapy.Field()  # input: URLs the spider collects for download
    images = scrapy.Field()      # output: download results, filled in by the pipeline
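For context, these two fields form the contract Scrapy's ImagesPipeline expects: image_urls is the input the spider fills, and images is the output the pipeline writes once downloads finish. A processed item would look roughly like this (the values are hypothetical):

item = {
    'image_urls': ['https://example.com/a.jpg'],
    'images': [{
        'url': 'https://example.com/a.jpg',
        'path': '4c99c...0f.jpg',   # filename produced by file_path() in pipelines.py below
        'checksum': '9f86d...b0',
    }],
}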

pipelines.py

import hashlib
import re

from scrapy.http import Request
from scrapy.pipelines.images import ImagesPipeline
from scrapy.utils.python import to_bytes


class PicscrapyPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # Only download URLs served over https
        for url in item['image_urls']:
            if re.match(r'https', url):
                yield Request(url)

    def file_path(self, request, response=None, info=None):
        if not isinstance(request, Request):
            url = request
        else:
            url = request.url
        image_guid = hashlib.sha1(to_bytes(url)).hexdigest()  # change to request.url after deprecation
        return '%s.jpg' % image_guid
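As a quick sanity check, file_path() maps every URL to a stable SHA1-based name, so re-downloading the same image overwrites the same file. A minimal sketch with a hypothetical URL:

import hashlib
from scrapy.utils.python import to_bytes

url = 'https://example.com/dress.jpg'  # hypothetical URL
print('%s.jpg' % hashlib.sha1(to_bytes(url)).hexdigest())
# prints a 40-character hex digest followed by '.jpg'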

settings.py

BOT_NAME = 'picScrapy'
SPIDER_MODULES = ['picScrapy.spiders']
NEWSPIDER_MODULE = 'picScrapy.spiders'
DEPTH_LIMIT = 3
IMAGES_STORE = 'F:/00'
IMAGES_MIN_WIDTH = 500
IMAGES_MIN_HEIGHT = 500
ROBOTSTXT_OBEY = False
LOG_FILE = "log"
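Note that IMAGES_MIN_WIDTH and IMAGES_MIN_HEIGHT make the images pipeline silently drop anything smaller than 500×500 pixels, which is easy to mistake for a download failure when debugging.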

pic.py

from urlparse import urljoin  # Python 2; on Python 3 this is urllib.parse.urljoin
from scrapy.spiders import Spider
from scrapy.http import Request
from picScrapy.items import PicscrapyItem


class PicSpider(Spider):
    name = "pic"  # 定义爬虫名
    start_url = 'https://s.taobao.com'  # 爬虫入口
    headers = {
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
    }

    def start_requests(self):
        for i in range(1, 2):
            # url = 'http://www.win4000.com/wallpaper_2285_0_10_%d.html' % i
            url = 'https://s.taobao.com/list?spm=a217f.8051907.312344.1.353deac38xy87V&q=' \
                  '%E8%BF%9E%E8%A1%A3%E8%A3%99&style=' \
                  'grid&seller_type=taobao&cps=yes&cat=51108009&bcoffset=12&s=' + str(60 * i)
            yield Request(url, headers=self.headers)

    def parse(self, response):
        item = PicscrapyItem()
        item['image_urls'] = response.xpath('//img/@data-src').extract()
        yield item

        all_urls = response.xpath('//img/@src').extract()
        for url in all_urls:
            url = urljoin(self.start_url, url)
            yield Request(url, callback=self.parse)

Log output

2017-07-11 14:28:25 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: picScrapy)
2017-07-11 14:28:25 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'picScrapy.spiders', 'SPIDER_MODULES': ['picScrapy.spiders'], 'LOG_FILE': 'log', 'DEPTH_LIMIT': 3, 'BOT_NAME': 'picScrapy'}
2017-07-11 14:28:25 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-07-11 14:28:25 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-07-11 14:28:25 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-07-11 14:28:25 [scrapy.middleware] INFO: Enabled item pipelines:
['picScrapy.pipelines.PicscrapyPipeline']
2017-07-11 14:28:25 [scrapy.core.engine] INFO: Spider opened
2017-07-11 14:28:25 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-07-11 14:28:25 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-07-11 14:28:26 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://s.taobao.com/list?spm=a217f.8051907.312344.1.353deac38xy87V&q=%E8%BF%9E%E8%A1%A3%E8%A3%99&style=grid&seller_type=taobao&cps=yes&cat=51108009&bcoffset=12&s=60> (referer: None)
2017-07-11 14:28:26 [scrapy.core.scraper] DEBUG: Scraped from <200 https://s.taobao.com/list?spm=a217f.8051907.312344.1.353deac38xy87V&q=%E8%BF%9E%E8%A1%A3%E8%A3%99&style=grid&seller_type=taobao&cps=yes&cat=51108009&bcoffset=12&s=60>
{'image_urls': [], 'images': []}
2017-07-11 14:28:26 [scrapy.core.engine] INFO: Closing spider (finished)
2017-07-11 14:28:26 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 426,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 37638,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 7, 11, 6, 28, 26, 395000),
 'item_scraped_count': 1,
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 7, 11, 6, 28, 25, 778000)}
2017-07-11 14:28:26 [scrapy.core.engine] INFO: Spider closed (finished)

1 Answer:

Answer 0 (score: 3):

You need to enable the pipeline in your settings.py file. If you want to use the stock Scrapy images pipeline, add this to your settings:

ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}

If you want to use a custom pipeline (such as the one in pipelines.py), you can add this to your settings instead:

ITEM_PIPELINES = {'[directory].pipelines.PicscrapyPipeline': 1} 

where [directory] is the directory containing your pipelines.py file.
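Putting this together for the project above (the enabled-pipelines line in the log already shows the module path picScrapy.pipelines.PicscrapyPipeline), the relevant settings would look like this sketch:

ITEM_PIPELINES = {
    'picScrapy.pipelines.PicscrapyPipeline': 1,
}
IMAGES_STORE = 'F:/00'  # the images pipeline stays disabled unless IMAGES_STORE is set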