Paginating with an authenticated spider. 302 redirect. reqvalidation.aspx page not found

Time: 2019-05-01 17:34:59

Tags: python-3.x authentication scrapy session-cookies

My spider can successfully log in to ancestry.com. I then use that authenticated session to request a new link, and I can successfully scrape the first page of that link. The problem occurs when I try to go to the second page: I get a 302 redirect debug message and this URL: https://secure.ancestry.com/error/reqvalidation.aspx?aspxerrorpath=http%3a%2f%2fsearch.ancestry.com%2ferror%2fPageNotFound&msg=&ti=0

I have followed the documentation and dug into some of the suggestions here. Does every page require a session token? If so, how do I get one?
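If the login sets session cookies, Scrapy's built-in CookiesMiddleware already persists them across every request in the same crawl, so a per-page token usually isn't needed. A minimal settings sketch for checking what is actually being sent, using Scrapy's COOKIES_DEBUG setting (a debugging aid, not a fix):

# settings.py (or custom_settings on the spider)
COOKIES_ENABLED = True  # the default; shown here for clarity
COOKIES_DEBUG = True    # log every Cookie / Set-Cookie header exchanged

With this enabled, the crawl log shows whether the request for page two still carries the login cookies or whether the session was dropped along the way.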

import scrapy
from scrapy import Request
from scrapy.http import FormRequest
from scrapy.loader import ItemLoader
from scrapy.spiders import CrawlSpider
from ..items import AncItem

class AncestrySpider(CrawlSpider):
    name = 'ancestry'

    def start_requests(self):
        return [
            FormRequest(
                'https://www.ancestry.com/account/signin?returnUrl=https%3A%2F%2Fwww.ancestry.com',
                formdata={"username": "foo", "password": "bar"},
                callback=self.after_login
            )
        ]

    def after_login(self, response):
        if "authentication failed".encode() in response.body:
            self.log("Login failed", level=log.ERROR)
            return
        else:
            return Request(url='https://www.ancestry.com/search/collections/nypl/?name=_Wang&count=50&name_x=_1',
                           callback=self.parse)

    def parse(self, response):
        all_products = response.xpath("//tr[@class='tblrow record']")
        for product in all_products:
            loader = ItemLoader(item=AncItem(), selector=product, response=response)
            loader.add_css('Name', '.srchHit')
            loader.add_css('Arrival_Date', 'td:nth-child(3)')
            loader.add_css('Birth_Year', 'td:nth-child(4)')
            loader.add_css('Port_of_Departure', 'td:nth-child(5)')
            loader.add_css('Ethnicity_Nationality', 'td:nth-child(6)')
            loader.add_css('Ship_Name', 'td:nth-child(7)')
            yield loader.load_item()

        # Pagination must sit outside the per-row loop, and the XPath has to
        # select the link's @href attribute, not the whole <a> element.
        next_page = response.xpath('//a[@class="ancBtn sml green icon iconArrowRight"]/@href').extract_first()
        if next_page is not None:
            next_page_link = response.urljoin(next_page)
            yield scrapy.Request(url=next_page_link, callback=self.parse)

I tried adding some request header information. I tried adding the cookie information to the request headers, but that did not work. I also tried using only the user agent listed in the POST request.
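For reference, Scrapy expects cookies through the cookies= argument of Request rather than a hand-built Cookie header, since CookiesMiddleware manages that header itself. A hedged sketch; the header and cookie values below are illustrative placeholders, not values the site is known to require:

yield scrapy.Request(
    url=next_page_link,
    headers={
        'Referer': response.url,  # some sites validate the referring page
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',  # placeholder UA string
    },
    cookies={'SessionId': 'example-token'},  # hypothetical cookie name/value
    callback=self.parse,
)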

Right now I only get 50 results. I should get several hundred once all the pages are scraped.

1 answer:

Answer 0: (score: 0)

Found the solution. It had nothing to do with authenticating against the website. I needed a different way of handling pagination: instead of following the "next page" button link, I resorted to building the page URLs directly.
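A minimal sketch of that approach, assuming the results URL accepts a page-style query parameter; the parameter name page below is hypothetical, so check what the browser's address bar shows on page two of the real results:

def parse(self, response):
    # ... yield the row items exactly as before ...

    # Build the next page URL directly instead of following the button link.
    current = response.meta.get('page', 1)
    if current < 10:  # assumed upper bound; derive it from the result count in practice
        url = ('https://www.ancestry.com/search/collections/nypl/'
               f'?name=_Wang&count=50&name_x=_1&page={current + 1}')
        yield scrapy.Request(url, callback=self.parse, meta={'page': current + 1})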