Scrapy and Splash won't crawl

Date: 2016-01-28 21:09:14

Tags: python scrapy web-crawler splash

I wrote a crawler, and Splash itself works (I tested it in my browser), but Scrapy cannot crawl and extract items.

My actual code is:

# -*- coding: utf-8 -*-
import scrapy
import json
from scrapy.http.headers import Headers
from scrapy.spiders import CrawlSpider, Rule
from oddsportal.items import OddsportalItem


class OddbotSpider(CrawlSpider):
    name = "oddbot"
    allowed_domains = ["oddsportal.com"]
    start_urls = (
        'http://www.oddsportal.com/matches/tennis/',
    )

    def start_requests(self):
        for url in self.start_urls:
            # Ask Splash to render the page, waiting for the
            # JavaScript to finish before returning the HTML.
            yield scrapy.Request(url, self.parse, meta={
                'splash': {
                    'endpoint': 'render.html',
                    'args': {'wait': 5.5}
                }
            })

    def parse(self, response):
        item = OddsportalItem()
        print(response.body)

2 Answers:

Answer 0 (score: 0)

Try importing scrapy_splash and issuing the request through SplashRequest, like this:

from scrapy_splash import SplashRequest

# 'args' takes whatever Splash arguments you need, e.g. {'wait': 5.5}
yield SplashRequest(url, endpoint='render.html', args={'any': any})
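For reference, here is a minimal end-to-end sketch of what this could look like for the spider in the question. The endpoint and wait value are taken from the question, the settings follow the scrapy-splash README, and the spider name, callback body, and local Splash URL are assumptions:

# settings.py -- scrapy-splash wiring per the scrapy-splash README;
# assumes a Splash instance running locally on the default port.
# SPLASH_URL = 'http://localhost:8050'
# DOWNLOADER_MIDDLEWARES = {
#     'scrapy_splash.SplashCookiesMiddleware': 723,
#     'scrapy_splash.SplashMiddleware': 725,
#     'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
# }
# SPIDER_MIDDLEWARES = {
#     'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
# }
# DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'

import scrapy
from scrapy_splash import SplashRequest


class OddbotSplashSpider(scrapy.Spider):
    name = "oddbot_splash"
    allowed_domains = ["oddsportal.com"]
    start_urls = ['http://www.oddsportal.com/matches/tennis/']

    def start_requests(self):
        for url in self.start_urls:
            # render.html returns the page HTML after JavaScript has run;
            # 'wait' gives the page time to finish rendering.
            yield SplashRequest(url, self.parse,
                                endpoint='render.html',
                                args={'wait': 5.5})

    def parse(self, response):
        # The response body now contains the rendered HTML.
        self.logger.info('Rendered %d bytes from %s',
                         len(response.body), response.url)

Note that without the middleware settings, requests go straight to the site and Splash is never involved, which matches the "won't crawl" symptom in the question.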

Answer 1 (score: 0)

You should override CrawlSpider's internal _requests_to_follow method so that it also accepts Splash responses:

from scrapy.http import HtmlResponse
from scrapy_splash import SplashJsonResponse, SplashTextResponse

def _requests_to_follow(self, response):
    # CrawlSpider's stock implementation only follows links from an
    # HtmlResponse; Splash returns SplashJsonResponse/SplashTextResponse,
    # so those types must be accepted here or no links get extracted.
    if not isinstance(response, (HtmlResponse, SplashJsonResponse, SplashTextResponse)):
        return
    seen = set()
    for n, rule in enumerate(self._rules):
        links = [lnk for lnk in rule.link_extractor.extract_links(response)
                 if lnk not in seen]
        if links and rule.process_links:
            links = rule.process_links(links)
        for link in links:
            seen.add(link)
            r = self._build_request(n, link)
            yield rule.process_request(r)
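
As a usage sketch, the override could live in a CrawlSpider subclass, with the rule's process_request hook routing followed links through Splash the same way the question's start requests do. The SplashCrawlSpider, use_splash, and parse_match names are illustrative, not from the original answer:

from scrapy.http import HtmlResponse
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy_splash import SplashJsonResponse, SplashTextResponse


class SplashCrawlSpider(CrawlSpider):
    """CrawlSpider variant that follows links from Splash responses."""

    def _requests_to_follow(self, response):
        # Same body as the override above.
        if not isinstance(response, (HtmlResponse, SplashJsonResponse,
                                     SplashTextResponse)):
            return
        seen = set()
        for n, rule in enumerate(self._rules):
            links = [lnk for lnk in rule.link_extractor.extract_links(response)
                     if lnk not in seen]
            if links and rule.process_links:
                links = rule.process_links(links)
            for link in links:
                seen.add(link)
                r = self._build_request(n, link)
                yield rule.process_request(r)


def use_splash(request):
    # Route a followed request through Splash by attaching the same
    # meta that the question used for its start requests.
    request.meta['splash'] = {
        'endpoint': 'render.html',
        'args': {'wait': 5.5},
    }
    return request


class OddbotCrawlSpider(SplashCrawlSpider):
    name = "oddbot_crawl"
    allowed_domains = ["oddsportal.com"]
    start_urls = ['http://www.oddsportal.com/matches/tennis/']
    rules = (
        Rule(LinkExtractor(), callback='parse_match',
             process_request=use_splash, follow=True),
    )

    def parse_match(self, response):
        self.logger.info('Crawled %s', response.url)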