I can't get my Scrapy spider to crawl my Discover account page.
I'm new to Scrapy. I've read all the relevant documentation, but I can't seem to get the form request right. I've added the form name, user ID, and password.
import scrapy


class DiscoverSpider(scrapy.Spider):
    name = "Discover"
    start_urls = ['https://www.discover.com']

    def parse(self, response):
        # Submit the login form found on the start page.
        return scrapy.FormRequest.from_response(
            response,
            formname='loginForm',
            formdata={'userID': 'userID', 'password': 'password'},
            callback=self.after_login
        )

    def after_login(self, response):
        # Check that the login succeeded before going on
        # (response.text is the decoded body; response.body is bytes in Python 3).
        if "authentication failed" in response.text:
            self.logger.error("Login failed")
            return
After submitting the form, I expect the spider to crawl my account pages. Instead, the spider is redirected to "https://portal.discover.com/psv1/notification.html". Below is the console output from the spider:
2018-12-26 11:39:46 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot:
MoneySpiders)
2018-12-26 11:39:46 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0,
libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.7.0,
Python 3.7.0 (default, Jun 28 2018, 08:04:48) [MSC v.1912 64 bit (AMD64)],
pyOpenSSL 18.0.0 (OpenSSL 1.0.2p 14 Aug 2018), cryptography 2.3.1,
Platform Windows-10-10.0.17134-SP0
2018-12-26 11:39:46 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'MoneySpiders', 'NEWSPIDER_MODULE': 'MoneySpiders.spiders',
'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['MoneySpiders.spiders']}
2018-12-26 11:39:46 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2018-12-26 11:39:46 [scrapy.middleware] INFO: Enabled downloader
middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-12-26 11:39:46 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-12-26 11:39:47 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-12-26 11:39:47 [scrapy.core.engine] INFO: Spider opened
2018-12-26 11:39:47 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at
0 pages/min), scraped 0 items (at 0 items/min)
2018-12-26 11:39:47 [scrapy.extensions.telnet] DEBUG: Telnet console
listening on
2018-12-26 11:39:47 [scrapy.core.engine] DEBUG: Crawled (200) <GET
https://www.discover.com/robots.txt> (referer: None)
2018-12-26 11:39:47 [scrapy.core.engine] DEBUG: Crawled (200) <GET
https://www.discover.com> (referer: None)
2018-12-26 11:39:48 [scrapy.core.engine] DEBUG: Crawled (200) <GET
https://portal.discover.com/robots.txt> (referer: None)
2018-12-26 11:39:48 [scrapy.downloadermiddlewares.redirect] DEBUG:
Redirecting (302) to <GET
https://portal.discover.com/psv1/notification.html> from <POST
https://portal.discover.com/customersvcs/universalLogin/signin>
2018-12-26 11:39:48 [scrapy.core.engine] DEBUG: Crawled (200) <GET
https://portal.discover.com/psv1/notification.html> (referer:
https://www.discover.com)
2018-12-26 11:39:48 [scrapy.core.scraper] ERROR: Spider error processing
<GET https://portal.discover.com/psv1/notification.html> (referer:
https://www.discover.com)
Answer (score: 1):
From the response I got this:
Your account cannot be accessed at this time. Outdated browsers can put your computer at risk of security threats. For the best experience on Discover.com, you may need to update your browser to the latest version and try again.
So it appears the website does not recognize your spider as a valid browser. To get around this, you will need to set an appropriate User-Agent, and possibly some of the other headers that such a browser would normally send.
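For example, here is a minimal sketch of how that could be done on the spider itself. The User-Agent string and extra headers below are placeholder values copied from a typical desktop Chrome browser; they are assumptions, not values confirmed to satisfy this particular site:

import scrapy


class DiscoverSpider(scrapy.Spider):
    name = "Discover"
    start_urls = ['https://www.discover.com']

    # Per-spider settings: present the spider as an ordinary desktop browser.
    # USER_AGENT and DEFAULT_REQUEST_HEADERS are standard Scrapy settings;
    # the concrete values here are illustrative placeholders.
    custom_settings = {
        'USER_AGENT': (
            'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
            'AppleWebKit/537.36 (KHTML, like Gecko) '
            'Chrome/71.0.3578.98 Safari/537.36'
        ),
        'DEFAULT_REQUEST_HEADERS': {
            'Accept': 'text/html,application/xhtml+xml,application/xml;'
                      'q=0.9,*/*;q=0.8',
            'Accept-Language': 'en-US,en;q=0.9',
        },
    }

    # ... parse() and after_login() as in the question ...

The same two settings can also be placed in the project's settings.py instead, if they should apply to every spider in the project.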