Form requests with Scrapy + Splash

Date: 2018-12-14 22:56:57

Tags: python python-3.x scrapy scrapy-splash

I am trying to log in to a website with the following code (slightly modified from this post):

import scrapy
from scrapy_splash import SplashRequest
from scrapy.crawler import CrawlerProcess

class Login_me(scrapy.Spider):
    name = 'espn'
    allowed_domains = ['games.espn.com']
    start_urls = ['http://games.espn.com/ffl/leaguerosters?leagueId=774630']

    def start_requests(self):
        script = """
        function main(splash)
                local url = splash.args.url

                assert(splash:go(url))
                assert(splash:wait(10))

                local search_input = splash:select('input[type=email]')   
                search_input:send_text("user email")

                local search_input = splash:select('input[type=password]')
                search_input:send_text("user password!")

                assert(splash:wait(10))
                local submit_button = splash:select('input[type=submit]')
                submit_button:click()

                assert(splash:wait(10))

                return html = splash:html()
              end
            """

        yield SplashRequest(
            'http://games.espn.com/ffl/leaguerosters?leagueId=774630',
            callback=self.after_login,
            endpoint='execute',
            args={'lua_source': script},
        )
    def after_login(self, response):
        # roster rows carry an id attribute; collect those ids
        table = response.xpath('//table[@id="playertable_0"]')
        for player in table.css('tr[id]'):
            item = {
                'id': player.css('::attr(id)').extract_first(),
            }
            yield item
            print(item)

I am getting this error:

<GET http://games.espn.com/ffl/signin?redir=http%3A%2F%2Fgames.espn.com%2Fffl%2Fleaguerosters%3FleagueId%3D774630> from <GET http://games.espn.com/ffl/leaguerosters?leagueId=774630>
2018-12-14 16:49:04 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://games.espn.com/ffl/signin?redir=http%3A%2F%2Fgames.espn.com%2Fffl%2Fleaguerosters%3FleagueId%3D774630> (referer: None)
2018-12-14 16:49:04 [scrapy.core.scraper] ERROR: Spider error processing <GET http://games.espn.com/ffl/signin?redir=http%3A%2F%2Fgames.espn.com%2Fffl%2Fleaguerosters%3FleagueId%3D774630> (referer: None)

For some reason, I still cannot log in. I have dug through many different posts here and tried many variations of "splash:select", but I can't seem to find my mistake. When I inspect the page with Chrome, I see this (the password field uses similar HTML):

 <input type="email" placeholder="Username or Email Address" autocapitalize="none" autocomplete="on" autocorrect="off" spellcheck="false" ng-model="vm.username" 
ng-pattern="/^[^<&quot;>]*$/" ng-required="true" did-disable-validate="" ng-focus="vm.resetUsername()" class="ng-pristine ng-invalid ng-invalid-required 
ng-valid-pattern ng-touched" tabindex="0" required="required" aria-required="true" aria-invalid="true">

I believe the HTML above is generated by JS, so I cannot scrape it with plain Scrapy. I looked at the page source, and I think the relevant JS code to use with Splash is this (although I'm not sure):

function authenticate(params) {
        return makeRequest('POST', '/guest/login', {
            'loginValue': params.loginValue,
            'password': params.password
        }, {
            'Authorization': params.authorization,
            'correlation-id': params.correlationId,
            'conversation-id': params.conversationId,
            'oneid-reporting': buildReportingHeader(params.reporting)
        }, {
            'langPref': getLangPref()
        });
    }
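
If that authenticate() function really is the login path, an alternative to driving the form at all would be to replay its POST from Scrapy directly. Here is a minimal, untested sketch: the '/guest/login' path and the loginValue/password keys come from the snippet above, but the host it is relative to is a guess, and the Authorization/correlation-id headers suggest extra tokens are involved, so the real request should be confirmed in Chrome's Network tab first:

import json
import scrapy

class EspnDirectLogin(scrapy.Spider):
    # hypothetical spider replaying the authenticate() POST seen above
    name = 'espn_direct'

    def start_requests(self):
        payload = {
            'loginValue': 'user email',      # params.loginValue
            'password': 'user password!',    # params.password
        }
        yield scrapy.Request(
            # assumed host for the relative '/guest/login' path --
            # verify against the actual request in the Network tab
            'http://games.espn.com/guest/login',
            method='POST',
            body=json.dumps(payload),
            headers={'Content-Type': 'application/json'},
            callback=self.after_login,
        )

    def after_login(self, response):
        self.logger.info('login response status: %s', response.status)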

Can anyone nudge me in the right direction?

1 Answer:

Answer 0: (score: 0)

The main problem here is that the login form sits inside an iframe element. I don't know scrapy_splash, so the POC code below uses Selenium and BeautifulSoup, but the mechanics with Splash should be similar: you need to switch into the iframe, and then switch back once it disappears after login.

from bs4 import BeautifulSoup
from selenium import webdriver

USER = 'theUser'
PASS = 'thePassword'

fp = webdriver.FirefoxProfile()
driver = webdriver.Firefox(fp)
driver.get('http://games.espn.com/ffl/leaguerosters?leagueId=774630')

# the login form lives inside this iframe, so switch into it first
iframe = driver.find_element_by_css_selector('iframe#disneyid-iframe')
driver.switch_to.frame(iframe)
driver.find_element_by_css_selector("input[type='email']").send_keys(USER)
driver.find_element_by_css_selector("input[type='password']").send_keys(PASS)
driver.find_element_by_css_selector("button[type='submit']").click()

# back to the top-level document once the login iframe is gone
driver.switch_to.default_content()
soup_level1 = BeautifulSoup(driver.page_source, 'html.parser')

For this code to work, you need Firefox installed, plus a compatible version of geckodriver available on your PATH.
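
For completeness: Splash has no equivalent of Selenium's switch_to.frame(), but if the iframe is same-origin you can reach into it from the Lua script with splash:evaljs. A rough, untested sketch, reusing the disneyid-iframe selector from the Selenium code above (note that inputs bound with ng-model may also need synthetic input events before Angular picks up values set this way):

function main(splash)
    assert(splash:go(splash.args.url))
    assert(splash:wait(10))

    -- splash:select only sees the top-level document, so reach into
    -- the (assumed same-origin) login iframe through the DOM instead
    splash:evaljs([[
        var doc = document.querySelector('#disneyid-iframe').contentDocument;
        doc.querySelector("input[type='email']").value = 'user email';
        doc.querySelector("input[type='password']").value = 'user password!';
        doc.querySelector("button[type='submit']").click();
    ]])

    assert(splash:wait(10))
    -- back at the top-level document, hopefully past the login
    return splash:html()
end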