Scrapy parses the wrong page after authentication

Time: 2014-10-05 04:17:18

Tags: python web-scraping scrapy scrapy-spider

I'm fairly new to this, and I borrowed code from the web. I'm trying to parse the content of a page after authenticating, but I only get the login page back, even though it looks like I'm logging in correctly. Eventually I want a specific table, but for now I'd be happy with a dump of the page.

# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy import log

class AesopSpider(scrapy.Spider):
    name = "alt"
    #allowed_domains = ["sub.aesoponline.com, kelly.aesoponline.com"]
    start_urls = (
        'http://kelly.aesoponline.com',
    )

    def parse(self, response):
        return scrapy.FormRequest.from_response(
                    response,
                    formdata={'id': '##', 'pin': '**'},
                    callback=self.after_login
                )


    def after_login(self, response):
        # check login succeeded before going on
        if "authentication failed" in response.body:
            self.log("Login failed", level=log.ERROR)
            return
        # We've successfully authenticated, let's have some fun!
        else:
            return Request(url="https://sub.aesoponline.com/Substitute/Home",
                           callback=self.parse_tastypage)


    def parse_tastypage(self, response):
        filename = response.url.split("/")[-2]
        with open(filename, 'wb') as f:
            f.write(response.body)

My results in the terminal are:

 sudo scrapy crawl alt
2014-10-05 00:14:33-0400 [scrapy] INFO: Scrapy 0.24.4 started (bot: kelly)
2014-10-05 00:14:33-0400 [scrapy] INFO: Optional features available: ssl, http11
2014-10-05 00:14:33-0400 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'kelly.spiders', 'SPIDER_MODULES': ['kelly.spiders'], 'BOT_NAME': 'kelly'}
2014-10-05 00:14:33-0400 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-10-05 00:14:33-0400 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-10-05 00:14:33-0400 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-10-05 00:14:33-0400 [scrapy] INFO: Enabled item pipelines:
2014-10-05 00:14:33-0400 [alt] INFO: Spider opened
2014-10-05 00:14:33-0400 [alt] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-10-05 00:14:33-0400 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2014-10-05 00:14:33-0400 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2014-10-05 00:14:33-0400 [alt] DEBUG: Redirecting (302) to <GET https://kelly.aesoponline.com/> from <GET http://kelly.aesoponline.com>
2014-10-05 00:14:33-0400 [alt] DEBUG: Redirecting (302) to <GET https://kelly.aesoponline.com/login.asp> from <GET https://kelly.aesoponline.com/>
2014-10-05 00:14:33-0400 [alt] DEBUG: Crawled (200) <GET https://kelly.aesoponline.com/login.asp> (referer: None)
2014-10-05 00:14:33-0400 [alt] DEBUG: Redirecting (302) to <GET https://sub.aesoponline.com/Login/RedirectLogin?userId=##&pin=**&remember=false&pswd=&loginBaseUrl=kelly.aesoponline.com> from <POST https://kelly.aesoponline.com/login.asp?x=x&&pswd=>
2014-10-05 00:14:34-0400 [alt] DEBUG: Redirecting (302) to <GET https://sub.aesoponline.com/Substitute/Home> from <GET https://sub.aesoponline.com/Login/RedirectLogin?userId=##&pin=**&remember=false&pswd=&loginBaseUrl=kelly.aesoponline.com>
2014-10-05 00:14:34-0400 [alt] DEBUG: Crawled (200) <GET https://sub.aesoponline.com/Substitute/Home> (referer: https://kelly.aesoponline.com/login.asp)
2014-10-05 00:14:34-0400 [alt] DEBUG: Filtered duplicate request: <GET https://sub.aesoponline.com/Substitute/Home> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2014-10-05 00:14:34-0400 [alt] INFO: Closing spider (finished)
2014-10-05 00:14:34-0400 [alt] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 2125,
         'downloader/request_count': 6,
         'downloader/request_method_count/GET': 5,
         'downloader/request_method_count/POST': 1,
         'downloader/response_bytes': 49386,
         'downloader/response_count': 6,
         'downloader/response_status_count/200': 2,
         'downloader/response_status_count/302': 4,
         'dupefilter/filtered': 1,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2014, 10, 5, 4, 14, 34, 400811),
         'log_count/DEBUG': 9,
         'log_count/INFO': 7,
         'request_depth_max': 2,
         'response_received_count': 2,
         'scheduler/dequeued': 6,
         'scheduler/dequeued/memory': 6,
         'scheduler/enqueued': 6,
         'scheduler/enqueued/memory': 6,
         'start_time': datetime.datetime(2014, 10, 5, 4, 14, 33, 527860)}
2014-10-05 00:14:34-0400 [alt] INFO: Spider closed (finished)

1 Answer:

Answer 0 (score: 0)

The page the site redirects you to after login (https://sub.aesoponline.com/Substitute/Home) is the same page you then try to fetch again and parse with parse_tastypage, so Scrapy's duplicate filter drops that second request (as the "Filtered duplicate request" line in your log shows).

Try calling parse_tastypage directly from after_login instead of issuing another Request to fetch that page. That way you parse the /Substitute/Home response you already have, like this:

def after_login(self, response):
    # check login succeeded before going on
    if "authentication failed" in response.body:
        self.log("Login failed", level=log.ERROR)
        return
    # We've successfully authenticated, let's have some fun!
    else:
        self.log("Login succeeded", level=log.INFO)
        self.log("URL: %s" % (response.url), level=log.INFO)
        return self.parse_tastypage(response)

(Alternatively, you could disable the duplicate filter for the /Substitute/Home request with the dont_filter argument (http://doc.scrapy.org/en/latest/topics/request-response.html), but that would be a bit wasteful, since it downloads the same page a second time.)
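To see why the second request was dropped: Scrapy's default duplicate filter remembers a fingerprint for every request it has scheduled and silently discards any later request with the same fingerprint, unless that request was created with dont_filter=True. The toy class below (SimpleDupeFilter is a made-up name for illustration, and it uses plain URLs rather than Scrapy's real fingerprints) sketches that behaviour:

```python
class SimpleDupeFilter:
    """Toy sketch of Scrapy's duplicate filter: remember what we've
    seen (here, just URLs) and reject repeats."""

    def __init__(self):
        self.seen = set()

    def request_seen(self, url, dont_filter=False):
        # dont_filter=True bypasses the check entirely,
        # like scrapy.Request(url, dont_filter=True)
        if dont_filter:
            return False
        if url in self.seen:
            return True  # duplicate: the request would be dropped
        self.seen.add(url)
        return False


f = SimpleDupeFilter()
home = "https://sub.aesoponline.com/Substitute/Home"
first = f.request_seen(home)                      # False: scheduled normally
second = f.request_seen(home)                     # True: filtered as duplicate
forced = f.request_seen(home, dont_filter=True)   # False: filter bypassed
```

In the spider above, the login redirect chain already "used up" the fingerprint for /Substitute/Home, which is why the explicit Request in after_login behaves like the second call here.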