Using Scrapy with saved HTML pages

Date: 2018-11-09 10:03:59

Tags: html web-scraping scrapy local scrapy-spider

I'm looking for a way to use Scrapy on HTML pages that I have saved on my computer. As things stand, I'm getting an error:

requests.exceptions.InvalidSchema: No connection adapters were found for 'file:///home/stage/Guillaume/scraper_test/mypage/details.html'

SPIDER_START_URLS = ["file:///home/stage/Guillaume/scraper_test/mypage/details.html"]
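
For what it's worth, that traceback is raised by the requests library, not by Scrapy itself: Scrapy's downloader ships with a built-in handler for the file:// scheme, so a plain spider with the file URL in start_urls would normally work out of the box. A minimal sketch, assuming the path from the question (the spider name and the parse extraction are placeholders):

import scrapy

class LocalPageSpider(scrapy.Spider):
    name = "local_page"
    # Scrapy's built-in file download handler serves file:// URLs directly
    start_urls = ["file:///home/stage/Guillaume/scraper_test/mypage/details.html"]

    def parse(self, response):
        # placeholder extraction; adapt the selectors to the saved page
        yield {"title": response.css("title::text").extract_first()}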

1 Answer:

Answer 0: (score: 1)

I've had great success injecting existing HTML files into the HTTPCACHE_DIR (which is almost always .scrapy/httpcache/${spider_name}) using request_fingerprint. Then, turn on the aforementioned http cache middleware (which defaults to file-based cache storage), along with the "Dummy Policy", which considers the on-disk files authoritative and won't issue a network request if it finds the URL in the cache.
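
On the configuration side, that amounts to a handful of settings. A sketch of the relevant settings.py entries, using Scrapy's standard HTTP-cache setting names (DummyPolicy and FilesystemCacheStorage are in fact the defaults, spelled out here for clarity):

# settings.py
HTTPCACHE_ENABLED = True
# DummyPolicy treats anything found in the cache as authoritative
# and never issues a network request for a cached URL
HTTPCACHE_POLICY = 'scrapy.extensions.httpcache.DummyPolicy'
# file-based storage under .scrapy/httpcache/${spider_name}
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
HTTPCACHE_DIR = 'httpcache'
HTTPCACHE_EXPIRATION_SECS = 0  # 0 = cached entries never expire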

I would expect the script to be something like this (this is just the general idea, and it's not guaranteed to even run):

import sys
from scrapy.extensions.httpcache import FilesystemCacheStorage
from scrapy.http import Request, HtmlResponse
from scrapy.settings import Settings

# this value is the actual URL from which the on-disk file was saved
# not the "file://" version
url = sys.argv[1]
html_filename = sys.argv[2]
with open(html_filename, 'rb') as fh:  # read the saved page as raw bytes
    html_bytes = fh.read()
req = Request(url=url)  # this request's fingerprint determines the on-disk cache path
resp = HtmlResponse(url=req.url, body=html_bytes, encoding='utf-8', request=req)
settings = Settings()  # loads Scrapy's built-in defaults, including HTTPCACHE_DIR
cache = FilesystemCacheStorage(settings)
spider = None  # fill in an instance of your Spider class; its .name selects the cache subdirectory
cache.store_response(spider, req, resp)
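
Usage might then look like this (the script filename, the original URL, and the spider name are all hypothetical): inject each saved page once under the URL it was originally fetched from, then crawl as usual; with the dummy policy enabled, any URL found in the cache is answered from disk.

python inject_cache.py 'https://example.com/mypage/details.html' mypage/details.html
scrapy crawl local_page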