How to use Scrapy-Splash without Docker?

Time: 2019-07-26 08:57:57

Tags: python-3.x scrapy scrapy-splash

Is there a way to use Scrapy-Splash without Docker? I mean, I have a server running Python 3 with no Docker installed, and if possible I would prefer not to install Docker on it.

Also, what exactly does SPLASH_URL do? Can I just use my server's IP?
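From what I can tell, SPLASH_URL is supposed to be the base URL of a running Splash HTTP API instance that the scrapy-splash middleware forwards requests through, so the server's IP should work as long as Splash itself is listening there. A minimal sketch of the endpoint I believe that URL resolves to (the host address below is made up, 8050 is Splash's default HTTP port):

```python
from urllib.parse import urlencode

# Assumption: a Splash instance is listening at this address
# (8050 is Splash's default HTTP port).
SPLASH_URL = "http://192.168.0.10:8050"

# scrapy-splash forwards requests to Splash's HTTP API,
# e.g. the render.html endpoint:
params = {"url": "https://example.com", "wait": 0.5}
print(SPLASH_URL + "/render.html?" + urlencode(params))
# prints http://192.168.0.10:8050/render.html?url=https%3A%2F%2Fexample.com&wait=0.5
```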

Here is what I have already tried:

    def start_requests(self):
        url = ["europages.fr/entreprises/France/pg-20/resultats.html?ih=01510;01505;01515;01525;01530;01570;01565;01750;01590;01595;01575;01900;01920;01520;01905;01585;01685;01526;01607;01532;01580;01915;02731;01700;01600;01597;01910;01906"]
        print(url)
        yield SplashRequest(url = 'https://' + url[0], callback = self.parse_all_links,
            args={
                # optional; parameters passed to Splash HTTP API
                'wait': 0.5,

                # 'url' is prefilled from request url
                # 'http_method' is set to 'POST' for POST requests
                # 'body' is set to request body for POST requests
            } # optional; default is render.html
        ) ## TODO: change the callback
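The `parse_all_links` callback referenced above still has to be written; as a placeholder, here is a minimal stand-alone sketch of the link extraction I have in mind, using only the standard library (a real Scrapy callback would be a spider method taking a `response` and reading `response.text`, not a plain string):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href attribute values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def parse_all_links(html_text):
    # Stand-in for the spider callback: returns every href found.
    extractor = LinkExtractor()
    extractor.feed(html_text)
    return extractor.links

sample = '<a href="/entreprises/a.html">A</a><a href="/entreprises/b.html">B</a>'
print(parse_all_links(sample))
# prints ['/entreprises/a.html', '/entreprises/b.html']
```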

With the following in settings.py:

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    #'Europages.middlewares.EuropagesDownloaderMiddleware': 543,
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

And:

SPLASH_URL = "my server's URL"

I hope my post is clear.

Thanks and regards.

0 Answers:

No answers yet.