Cannot find the downloaded images from the Scrapy Python framework

Date: 2016-01-23 08:28:52

Tags: python web-scraping scrapy

I'm building a 4chan scraper to download the images from a thread. Everything works fine and I'm able to scrape the image links. The images pipeline also appears to be working; this is the output I get:

thisisppn@thisisppn-HP-15-Notebook-PC:~/Work/ScrapyTests/FourChan/FourChan$ sudo scrapy crawl imageSpider
2016-01-23 13:45:31 [scrapy] INFO: Scrapy 1.0.4 started (bot: FourChan)
2016-01-23 13:45:31 [scrapy] INFO: Optional features available: ssl, http11
2016-01-23 13:45:31 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'FourChan.spiders', 'SPIDER_MODULES': ['FourChan.spiders'], 'BOT_NAME': 'FourChan'}
2016-01-23 13:45:31 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2016-01-23 13:45:32 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-01-23 13:45:32 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-01-23 13:45:32 [scrapy] INFO: Enabled item pipelines: ImagesPipeline
2016-01-23 13:45:32 [scrapy] INFO: Spider opened
2016-01-23 13:45:32 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-01-23 13:45:32 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-01-23 13:45:33 [scrapy] DEBUG: Crawled (200) <GET http://boards.4chan.org/a/thread/136492097/would-you-still-stay-with-you-waifu-if-she-was> (referer: None)
2016-01-23 13:45:33 [scrapy] DEBUG: File (uptodate): Downloaded image from <GET http://i.4cdn.org/a/1453536312441.jpg> referred in <None>
2016-01-23 13:45:33 [scrapy] DEBUG: Scraped from <200 http://boards.4chan.org/a/thread/136492097/would-you-still-stay-with-you-waifu-if-she-was>
{'fileName': u'1453536312441.jpg',
 'image_urls': [u'http://i.4cdn.org/a/1453536312441.jpg']}
2016-01-23 13:45:33 [scrapy] INFO: Closing spider (finished)
2016-01-23 13:45:33 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 279,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 5639,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'file_count': 1,
 'file_status_count/uptodate': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 1, 23, 8, 15, 33, 468893),
 'item_scraped_count': 1,
 'log_count/DEBUG': 4,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2016, 1, 23, 8, 15, 32, 44913)}
2016-01-23 13:45:33 [scrapy] INFO: Spider closed (finished)

Below are the image pipeline settings in my settings.py file:

ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}
IMAGES_STORE = '/Work/ScrapyTests/FourChan/FourChan/downloads/'

After the crawl finishes, I check the given directory for the downloaded images, but the folder is empty. Could someone please take a look?

Here is the main spider code for reference:

import scrapy
import FourChan.items as items


class DmozSpider(scrapy.Spider):
    name = "imageSpider"
    allowed_domains = ["4chan.org"]
    # thread = raw_input("Enter thread link: ")
    start_urls = [
        # thread
        "http://boards.4chan.org/a/thread/136492097/would-you-still-stay-with-you-waifu-if-she-was",
    ]

    def parse(self, response):
        # Each .fileThumb link holds the full-size image URL, minus the scheme.
        image_links = response.css('.fileThumb::attr(href)').extract()
        for link in image_links:
            url = "http:" + link
            filename = url.split("/")[-1]
            # Create a new item per image so every yielded item keeps its own values.
            item = items.FourchanItem()
            item['fileName'] = filename
            item['image_urls'] = [url]  # Must be a list, not a string, for the images pipeline to work
            yield item
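
The items.py file isn't shown in the question, but for the stock ImagesPipeline the item has to expose an image_urls field (and an images field that the pipeline fills in with the download results). A minimal sketch of what FourchanItem presumably looks like:

# items.py (sketch; the original file is not shown in the question)
import scrapy

class FourchanItem(scrapy.Item):
    fileName = scrapy.Field()
    image_urls = scrapy.Field()  # read by ImagesPipeline: the list of URLs to download
    images = scrapy.Field()      # filled in by ImagesPipeline with the download results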

1 Answer:

Answer 0 (score: 2):

This was really silly. The mistake was here:

IMAGES_STORE = '/Work/ScrapyTests/FourChan/FourChan/downloads/'

It should instead be:

IMAGES_STORE = 'Work/ScrapyTests/FourChan/FourChan/downloads/'

The leading forward slash is not needed. With it, the path is treated as an absolute path starting at the filesystem root (/Work/...), which does not exist on my machine; without it, the path is resolved relative to the directory the crawl is run from.
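
As a side note (not part of the original answer, just a sketch): because a relative IMAGES_STORE depends on whatever directory scrapy crawl is run from, one way to remove the ambiguity is to build an absolute path in settings.py from the location of the settings file itself. PROJECT_DIR below is a name introduced here for illustration:

# settings.py (sketch): anchor the download directory to the project itself,
# not to the process's current working directory.
import os

PROJECT_DIR = os.path.dirname(os.path.abspath(__file__))

ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}
IMAGES_STORE = os.path.join(PROJECT_DIR, 'downloads')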