Scrapy Authenticated Spider Gets Internal Server Error

Time: 2016-04-29 06:17:11

Tags: python python-2.7 authentication scrapy internal-server-error

I am trying to build an authenticated spider. I have gone through nearly every post related to Scrapy authenticated spiders and could not find an answer to my problem. I used the following code:

import scrapy

from scrapy.spider import BaseSpider
from scrapy.selector import Selector
from scrapy.http import FormRequest, Request
import logging
from PWC.items import PwcItem


class PwcmoneySpider(scrapy.Spider):
    name = "PWCMoney"
    allowed_domains = ["pwcmoneytree.com"]
    start_urls = (
        'https://www.pwcmoneytree.com/SingleEntry/singleComp?compName=Addicaid',
    )

    def parse(self, response):
        return [scrapy.FormRequest("https://www.pwcmoneytree.com/Account/Login",
                                   formdata={'UserName': 'user', 'Password': 'pswd'},
                                   callback=self.after_login)]

    def after_login(self, response):
        if "authentication failed" in response.body:
            self.log("Login failed", level=logging.ERROR)
            return
        # We've successfully authenticated, let's have some fun!
        print("Login Successful!!")
        return Request(url="https://www.pwcmoneytree.com/SingleEntry/singleComp?compName=Addicaid",
                       callback=self.parse_tastypage)

    def parse_tastypage(self, response):
        for sel in response.xpath('//div[@id="MainDivParallel"]'):
            item = PwcItem()
            item['name'] = sel.xpath('div[@id="CompDiv"]/h2/text()').extract()
            item['location'] = sel.xpath('div[@id="CompDiv"]/div[@id="infoPane"]/div[@class="infoSlot"]/div/a/text()').extract()
            item['region'] = sel.xpath('div[@id="CompDiv"]/div[@id="infoPane"]/div[@id="contactInfoDiv"]/div[1]/a[2]/text()').extract()
            yield item

I got the following output:

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\Python27\PWC>scrapy crawl PWCMoney -o test.csv
2016-04-29 11:37:35 [scrapy] INFO: Scrapy 1.0.5 started (bot: PWC)
2016-04-29 11:37:35 [scrapy] INFO: Optional features available: ssl, http11
2016-04-29 11:37:35 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'PWC.spiders', 'FEED_FORMAT': 'csv', 'SPIDER_MODULES': ['PWC.spiders'], 'FEED_URI': 'test.csv', 'BOT_NAME': 'PWC'}
2016-04-29 11:37:35 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2016-04-29 11:37:36 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-04-29 11:37:36 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-04-29 11:37:36 [scrapy] INFO: Enabled item pipelines:
2016-04-29 11:37:36 [scrapy] INFO: Spider opened
2016-04-29 11:37:36 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-04-29 11:37:36 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-04-29 11:37:37 [scrapy] DEBUG: Retrying <POST https://www.pwcmoneytree.com/Account/Login> (failed 1 times): 500 Internal Server Error
2016-04-29 11:37:38 [scrapy] DEBUG: Retrying <POST https://www.pwcmoneytree.com/Account/Login> (failed 2 times): 500 Internal Server Error
2016-04-29 11:37:38 [scrapy] DEBUG: Gave up retrying <POST https://www.pwcmoneytree.com/Account/Login> (failed 3 times): 500 Internal Server Error
2016-04-29 11:37:38 [scrapy] DEBUG: Crawled (500) <POST https://www.pwcmoneytree.com/Account/Login> (referer: None)
2016-04-29 11:37:38 [scrapy] DEBUG: Ignoring response <500 https://www.pwcmoneytree.com/Account/Login>: HTTP status code is not handled or not allowed
2016-04-29 11:37:38 [scrapy] INFO: Closing spider (finished)
2016-04-29 11:37:38 [scrapy] INFO: Closing spider (finished)
2016-04-29 11:37:38 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 954,
 'downloader/request_count': 3,
 'downloader/request_method_count/POST': 3,
 'downloader/response_bytes': 30177,
 'downloader/response_count': 3,
 'downloader/response_status_count/500': 3,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 4, 29, 6, 7, 38, 674000),
 'log_count/DEBUG': 6,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 3,
 'scheduler/dequeued/memory': 3,
 'scheduler/enqueued': 3,
 'scheduler/enqueued/memory': 3,
 'start_time': datetime.datetime(2016, 4, 29, 6, 7, 36, 193000)}
2016-04-29 11:37:38 [scrapy] INFO: Spider closed (finished)

Being new to Python and Scrapy, I can't seem to make sense of the error, and I hope someone can help me.

So, following Rejected's advice, I modified the code like this (only the modified part is shown):

    allowed_domains = ["pwcmoneytree.com"]
    start_urls = (
        'https://www.pwcmoneytree.com/Account/Login',
    )

    def start_requests(self):
        return [scrapy.FormRequest.from_response("https://www.pwcmoneytree.com/Account/Login",
                                                 formdata={'UserName': 'user', 'Password': 'pswd'},
                                                 callback=self.logged_in)]

and got the following error:

C:\Python27\PWC>scrapy crawl PWCMoney -o test.csv
2016-04-30 11:04:47 [scrapy] INFO: Scrapy 1.0.5 started (bot: PWC)
2016-04-30 11:04:47 [scrapy] INFO: Optional features available: ssl, http11
2016-04-30 11:04:47 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'PWC.spiders', 'FEED_FORMAT': 'csv', 'SPIDER_MODULES': ['PWC.spiders'], 'FEED_URI': 'test.csv', 'BOT_NAME': 'PWC'}
2016-04-30 11:04:50 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2016-04-30 11:04:54 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-04-30 11:04:54 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-04-30 11:04:54 [scrapy] INFO: Enabled item pipelines:
2016-04-30 11:04:54 [scrapy] INFO: Enabled item pipelines:
Unhandled error in Deferred:
2016-04-30 11:04:54 [twisted] CRITICAL: Unhandled error in Deferred:


Traceback (most recent call last):
  File "c:\python27\lib\site-packages\scrapy\cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "c:\python27\lib\site-packages\scrapy\commands\crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "c:\python27\lib\site-packages\scrapy\crawler.py", line 153, in crawl
    d = crawler.crawl(*args, **kwargs)
  File "c:\python27\lib\site-packages\twisted\internet\defer.py", line 1274, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "c:\python27\lib\site-packages\twisted\internet\defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "c:\python27\lib\site-packages\scrapy\crawler.py", line 72, in crawl
    start_requests = iter(self.spider.start_requests())
  File "C:\Python27\PWC\PWC\spiders\PWCMoney.py", line 16, in start_requests
    callback=self.logged_in)]
  File "c:\python27\lib\site-packages\scrapy\http\request\form.py", line 36, in from_response
    kwargs.setdefault('encoding', response.encoding)
exceptions.AttributeError: 'str' object has no attribute 'encoding'
2016-04-30 11:04:54 [twisted] CRITICAL:

2 Answers:

Answer 0 (score: 1):

As your error log shows, it is the POST request to https://www.pwcmoneytree.com/Account/Login that is returning the 500 error.

I tried issuing the same POST request manually with Postman. It returns a 500 error code and an HTML page containing this error message:

    The required anti-forgery cookie "__RequestVerificationToken" is not present.

This is a feature many APIs and websites use to prevent CSRF attacks. If you still want to scrape the site, you have to visit the login form and pick up the proper cookie first, before logging in.
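
A minimal sketch of that cookie-first flow, assuming the login form embeds the __RequestVerificationToken as a hidden input (FormRequest.from_response copies hidden fields into the POST automatically; the "authentication failed" check and the target URL are carried over from the question, everything else is illustrative, not a verified implementation):

import scrapy


class LoginFirstSpider(scrapy.Spider):
    name = "login_first"
    allowed_domains = ["pwcmoneytree.com"]
    # GET the login page first, so the anti-forgery cookie is set and
    # the hidden __RequestVerificationToken field can be read.
    start_urls = ['https://www.pwcmoneytree.com/Account/Login']

    def parse(self, response):
        # from_response pre-fills all hidden inputs (including the
        # verification token) and submits to the form's action URL.
        return scrapy.FormRequest.from_response(
            response,
            formdata={'UserName': 'user', 'Password': 'pswd'},
            callback=self.after_login)

    def after_login(self, response):
        # Assumption: a failed login re-renders the form with an error.
        if "authentication failed" in response.body:
            self.logger.error("Login failed")
            return
        yield scrapy.Request(
            'https://www.pwcmoneytree.com/SingleEntry/singleComp?compName=Addicaid',
            callback=self.parse_target)

    def parse_target(self, response):
        pass  # scrape the authenticated page here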

Answer 1 (score: 0):

You are making the crawler do extra work for no reason. Your first request (started from start_urls) gets processed, and then its response is thrown away. There is rarely a reason to do this (unless making that request is itself a requirement).

Instead, change your start_urls to 'https://www.pwcmoneytree.com/Account/Login', and change scrapy.FormRequest(...) to scrapy.FormRequest.from_response(...). You will also need to replace the URL being provided with the response that was received (and possibly identify the form that's needed).

This will save you a wasted request, extract/pre-fill the other validation tokens for you, and clean up your code.

Edit: Below is the code you should be using. Note: you changed self.after_login to self.logged_in, so I have kept the newer name.

...
allowed_domains = ["pwcmoneytree.com"]
start_urls = (
    'https://www.pwcmoneytree.com/Account/Login',
)

def parse(self, response):
    return scrapy.FormRequest.from_response(response,
                               formdata={'UserName': 'user', 'Password': 'pswd'},
                               callback=self.logged_in)
...
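
For completeness, the logged_in callback could then verify the login and move on to the page you actually want to scrape. A sketch, assuming a failed login echoes "authentication failed" back in the page body (reusing parse_tastypage from the question):

    def logged_in(self, response):
        # Assumption: a failed login re-renders the form with an error message.
        if "authentication failed" in response.body:
            self.log("Login failed", level=logging.ERROR)
            return
        # The session cookies set during login are kept by Scrapy's
        # CookiesMiddleware, so this request runs as the authenticated user.
        return scrapy.Request(
            url="https://www.pwcmoneytree.com/SingleEntry/singleComp?compName=Addicaid",
            callback=self.parse_tastypage)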