Scrapy-Splash not crawling

Date: 2021-06-27 23:21:48

Tags: python scrapy scrapy-splash

I'm trying to do a very basic print of the links returned when searching for a company's annual reports via this URL: https://www.mergentarchives.com/searchResults.php?searchType=annualReports&companyName=3Com+Corp.&compNumber=37958&aracompNumber=0

I need to use Splash to render the page because the site is built with JavaScript and the search results are loaded dynamically. When I try to print out the list of links, the spider doesn't crawl. Here is my very simple code:

import scrapy
from scrapy_splash import SplashRequest

class MergentSpider(scrapy.Spider):
    name = 'mergent'
    start_urls = ['https://www.mergentarchives.com/searchResults.php?searchType=annualReports&companyName=3Com+Corp.&compNumber=37958&aracompNumber=0']

    def parse(self, response):
        url = response.url + "/searchResults.php?searchType=annualReports&companyName=3Com+Corp.&compNumber=37958&aracompNumber=0"
        yield SplashRequest(url=url, callback=self.parse_results)

    def parse_results(self, response):
        # Note the closing parenthesis inside the selector string
        for document in response.css("a::attr(onclick)"):
            print(document)
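As an aside, concatenating onto `response.url` inside `parse` doubles the path and query string, because `start_urls` already contains the full search URL. A small stdlib sketch shows the difference (using the URL from the question):

```python
from urllib.parse import urljoin

base = ("https://www.mergentarchives.com/searchResults.php"
        "?searchType=annualReports&companyName=3Com+Corp."
        "&compNumber=37958&aracompNumber=0")

# Plain string concatenation, as in parse(), repeats the path and query:
doubled = base + "/searchResults.php?searchType=annualReports"

# urljoin() resolves the relative reference against the last path
# segment instead, producing a single, well-formed URL:
joined = urljoin(base, "searchResults.php?searchType=annualReports")

print(doubled.count("searchResults.php"))  # 2
print(joined.count("searchResults.php"))   # 1
```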

I added these settings:

SPIDER_MIDDLEWARES = {
    'mergent_scraper.middlewares.MergentScraperSpiderMiddleware': 543,
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DOWNLOADER_MIDDLEWARES = {
    'mergent_scraper.middlewares.MergentScraperDownloaderMiddleware': 543,
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPLASH_URL = 'http://127.0.0.1:8050'
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

I'm running Splash with Docker using this command:

sudo docker run -it -p 8050:8050 --rm scrapinghub/splash
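With the container running, Splash can be sanity-checked independently of Scrapy by requesting its render.html HTTP endpoint, which returns the JavaScript-rendered HTML of a target page. One way to build that request URL (the 127.0.0.1:8050 address comes from the settings above; the `wait` value is an assumption, not something from the question):

```python
from urllib.parse import urlencode

splash = "http://127.0.0.1:8050"
target = ("https://www.mergentarchives.com/searchResults.php"
          "?searchType=annualReports&companyName=3Com+Corp."
          "&compNumber=37958&aracompNumber=0")

# "wait" gives dynamically loaded content time to appear before the
# snapshot is taken; 2 seconds is just a guess for this site.
params = urlencode({"url": target, "wait": 2})
render_url = f"{splash}/render.html?{params}"
print(render_url)
```

Opening that URL in a browser (or fetching it with curl) shows exactly what HTML Splash hands back to the spider.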

What am I doing wrong? Why won't this spider crawl?

1 Answer:

Answer 0 (score: 0)

It looks like the site requires a login? If so, you need to look at: using-formrequest-from-response-to-simulate-a-user-login. The spider won't know there is a login form.

A quick way to test this is to use the scrapy shell, send the Splash request, and check whether you get back the page you expect.
