scrapy-splash script can't find CSS selector

Asked: 2017-07-05 00:05:51

Tags: scrapy splash scrapy-splash

I'm trying to write a scrapy-splash script to get the links to the food items on this page:

https://www.realcanadiansuperstore.ca/Food/Meat-%26-Seafood/c/RCSS001004000000

When you first visit it, it makes you choose a region. I think I've handled that correctly by setting the cookies dict in the code below. I'm trying to get the links to all of the food items in the carousel. I'm using Splash because the carousel is built by JavaScript; with a regular request, the items don't show up in the HTML when I parse it with Beautiful Soup. My problem is that I'm not getting any data into my item dictionary.

import scrapy
from scrapy_splash import SplashRequest

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = ["https://www.realcanadiansuperstore.ca/Food/Meat-%26-Seafood/c/RCSS001004000000"]

    def start_requests(self):
        for url in self.start_urls:
            # Set the region cookie so the site skips the region-selection page
            yield SplashRequest(url, cookies={'currentRegion': 'CA-BC'},
                                callback=self.parse, endpoint='render.html',
                                args={'wait': 0.5})

    def parse(self, response):
        item = {}
        item['urls'] = []

        itemList = response.css('div.product-name-wrapper > a > ::attr(href)').extract()

        for links in itemList:
            item['urls'].append(links)

        yield item

I don't think my cookie is being set correctly, so it's taking me to the page that asks you to select a region.
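
One quick way to verify that is to dump the rendered HTML from parse and open it in a browser (a minimal debugging sketch):

    def parse(self, response):
        # Save the Splash-rendered page for inspection: if the
        # region-selection dialog is in this file, the cookie
        # was not honored.
        with open('rendered.html', 'wb') as f:
            f.write(response.body)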

By the way, I do have Splash running in Docker. If I visit my localhost in a browser, it shows the Splash page.
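
For reference, the container is running via the standard command, so Splash itself seems fine:

docker run -p 8050:8050 scrapinghub/splash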

This is the output I get when I run the spider:

<GET https://www.realcanadiansuperstore.ca/Food/Meat-%26-Seafood/c/RCSS001004000000 via http://localhost:8050/render.html> (referer: None)
2017-07-04 16:44:05 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.realcanadiansuperstore.ca/Food/Meat-%26-Seafood/c/RCSS001004000000>
{'urls': []}

What could be going wrong here? I filled in my settings file as described here: https://github.com/scrapy-plugins/scrapy-splash
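
For reference, the setup that README asks for looks like this (copied from the linked README; SPLASH_URL assumes the default local Docker instance):

# settings.py, per the scrapy-splash README
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'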

OK, I've been able to get the localhost browser instance of Splash to render the HTML I need by setting a cookie like this:

function main(splash)
    splash:add_cookie{"sessionid", "237465ghgfsd", "/", domain="http://example.com"}
    splash:go("http://example.com/")
    return splash:html()
end

But that's a script you type into the browser page. How do I apply it in my Python script? Is there a different way to add cookies in Python?

1 Answer:

Answer 0 (score: 0)

If you have a script that works for you, you can run it with the /execute endpoint:

yield SplashRequest(url, endpoint='execute', args={'lua_source': my_script})
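
For example, the add_cookie script from the question can be adapted and passed in from Python (a sketch; the cookie domain for the Superstore site is an assumption, check the real cookie in your browser's dev tools):

region_script = """
function main(splash)
    -- domain below is an assumption; verify it in the browser's dev tools
    splash:add_cookie{"currentRegion", "CA-BC", "/",
                      domain=".realcanadiansuperstore.ca"}
    splash:go(splash.args.url)
    splash:wait(0.5)
    return splash:html()
end
"""

# inside the spider's start_requests:
yield SplashRequest(url, callback=self.parse, endpoint='execute',
                    args={'lua_source': region_script})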

scrapy-splash also lets you set up transparent cookie handling, so that cookies are kept across SplashRequests just like with regular scrapy.Request objects:

script = """
function main(splash)
  splash:init_cookies(splash.args.cookies)
  assert(splash:go{
    splash.args.url,
    headers=splash.args.headers,
    http_method=splash.args.http_method,
    body=splash.args.body,
    })
  assert(splash:wait(0.5))

  local entries = splash:history()
  local last_response = entries[#entries].response
  return {
    url = splash:url(),
    headers = last_response.headers,
    http_status = last_response.status,
    cookies = splash:get_cookies(),
    html = splash:html(),
  }
end
"""

class MySpider(scrapy.Spider):

    # def my_parse...
    #   ...
        yield SplashRequest(url, self.parse_result,
            endpoint='execute',
            cache_args=['lua_source'],
            args={'lua_source': script},
        )

    def parse_result(self, response):
        # here response.body contains the resulting HTML;
        # response.headers are filled with headers from the last
        # web page loaded into Splash;
        # cookies from all responses and from JavaScript are collected
        # and put into the Set-Cookie response header, so that Scrapy
        # can remember them.
        pass
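
Applied to the spider from the question, parse_result can then run the original CSS selector against the rendered HTML (a sketch, assuming that selector matches once the region cookie is honored):

    def parse_result(self, response):
        # response.css() now operates on the Splash-rendered HTML
        yield {'urls': response.css(
            'div.product-name-wrapper > a > ::attr(href)').extract()}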

See the examples in the scrapy-splash README.