After running the spider, I get this error:
[scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-12-30 01:18:36 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-12-30 01:18:37 [scrapy.core.engine] DEBUG: Crawled (405) <GET https://www.propertyguru.com.sg/robots.txt> (referer: None)
2018-12-30 01:18:37 [scrapy.core.engine] DEBUG: Crawled (405) <GET https://www.propertyguru.com.sg/> (referer: None)
2018-12-30 01:18:38 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <405 https://www.propertyguru.com.sg/>: HTTP status code is not handled or not allowed
Answer (score: 1)
The 405 responses suggest the site is rejecting requests sent with Scrapy's default headers. You need to include a User-Agent and cookies in the request:
def start_requests(self):
    # Send a browser-like User-Agent and session cookies so the site
    # does not reject the request with a 405.
    headers = {'User-Agent': 'your user agent'}
    cookies = {'cookie-key': 'cookie-value'}
    yield scrapy.Request(
        url='https://www.propertyguru.com.sg/',
        method='GET',
        headers=headers,
        cookies=cookies,
        callback=self.parse,
        errback=self.handle_err,
    )
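For context, here is a minimal sketch of a complete spider that this method could live in. The spider name, allowed_domains, and the parse/handle_err bodies are illustrative placeholders, not part of the original answer; only start_requests comes from it.

import scrapy


class PropertyGuruSpider(scrapy.Spider):
    # Hypothetical spider class wrapping the start_requests method above.
    name = 'propertyguru'
    allowed_domains = ['www.propertyguru.com.sg']

    def start_requests(self):
        headers = {'User-Agent': 'your user agent'}
        cookies = {'cookie-key': 'cookie-value'}
        yield scrapy.Request(
            url='https://www.propertyguru.com.sg/',
            headers=headers,
            cookies=cookies,
            callback=self.parse,
            errback=self.handle_err,
        )

    def parse(self, response):
        # Placeholder callback: extract whatever data you need here.
        self.logger.info('Got %s with status %s', response.url, response.status)

    def handle_err(self, failure):
        # Placeholder errback: just log the failed request.
        self.logger.error('Request failed: %r', failure)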
To get the User-Agent and cookies, open Google Chrome's developer console and enter:
navigator.userAgent (for the User-Agent)
document.cookie (for the cookies)
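Note that document.cookie returns a single "name=value; name2=value2" string, while scrapy.Request expects a dict. A small helper like the one below (a hypothetical sketch, not part of the original answer) can do the conversion:

def cookies_from_document_cookie(cookie_string):
    # Convert the "name=value; name2=value2" string returned by
    # document.cookie into the dict that scrapy.Request expects.
    cookies = {}
    for pair in cookie_string.split(';'):
        pair = pair.strip()
        if '=' in pair:
            name, _, value = pair.partition('=')
            cookies[name] = value
    return cookies


# Example with made-up values:
# cookies_from_document_cookie('sessionid=abc123; csrftoken=xyz789')
# -> {'sessionid': 'abc123', 'csrftoken': 'xyz789'}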