scrapy spider error: 403 HTTP status code is not handled or not allowed

Asked: 2016-10-20 19:10:54

Tags: python scrapy middleware http-status-code-403

I tried to run a spider and got a 403 error. I am pasting the code and output below. I found a question similar to mine here, but since I am pretty new to Scrapy, I didn't understand much of it and don't know how to fix my code.

The spider code:

import scrapy
from scrapy.selector import HtmlXPathSelector
from laptop_sample.items import LaptopSampleItem

class MySpider(scrapy.Spider):
    name = "laptop"
    allowed_domains = ["specout.com"]
    start_urls = ["http://laptops.specout.com/l/1149/4752-32352G50Mn"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        items = []
        item = LaptopSampleItem()
        item["title"] = hxs.select("//h1[@class='stnd-page-title']/span[@class='fn']/text()").extract()
        items.append(item)
        return items

This is the output I got:

2016-10-21 00:22:58 [scrapy] INFO: Scrapy 1.0.5 started (bot: laptop_sample)
2016-10-21 00:22:58 [scrapy] INFO: Optional features available: ssl, http11
2016-10-21 00:22:58 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'laptop_sample.spiders', 'SPIDER_MODULES': ['laptop_sample.spiders'], 'BOT_NAME': 'laptop_sample'}
2016-10-21 00:22:59 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2016-10-21 00:23:00 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-10-21 00:23:00 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-10-21 00:23:00 [scrapy] INFO: Enabled item pipelines:
2016-10-21 00:23:00 [scrapy] INFO: Spider opened
2016-10-21 00:23:00 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-10-21 00:23:00 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-10-21 00:23:00 [scrapy] DEBUG: Crawled (403) <GET http://laptops.specout.com/l/1149/4752-32352G50Mn> (referer: None)
2016-10-21 00:23:00 [scrapy] DEBUG: Ignoring response <403 http://laptops.specout.com/l/1149/4752-32352G50Mn>: HTTP status code is not handled or not allowed
2016-10-21 00:23:00 [scrapy] INFO: Closing spider (finished)
2016-10-21 00:23:00 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 192,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 1051,
 'downloader/response_count': 1,
 'downloader/response_status_count/403': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 10, 20, 18, 53, 0, 750000),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2016, 10, 20, 18, 53, 0, 35000)}
2016-10-21 00:23:00 [scrapy] INFO: Spider closed (finished)

I have added the following line to settings.py, but it didn't fix the issue.

DOWNLOADER_MIDDLEWARES = {'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,}

Any insights will be appreciated.

2 Answers:

Answer 0 (score: 2)

HTTP error code 403 is raised when a request is forbidden. Scrapy automatically adds a USER_AGENT of Scrapy/VERSION (+http://scrapy.org) to every request it sends. Since many sites reject that, the workaround is to set USER_AGENT to mimic a browser, e.g. Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0, which identifies you as a browser. You can learn how to set Scrapy spider settings here.
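For example, a minimal sketch of the relevant line in the project's settings.py (the user-agent string is only an illustration; any common browser string should work):

# settings.py -- identify the crawler as a regular browser
USER_AGENT = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0"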

While the user agent looks like the cause of your problem, this can also be done with selenium: create a webdriver instance to fetch the page, then parse the response from selenium and build the Scrapy item objects from it.

import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http import TextResponse
from laptop_sample.items import LaptopSampleItem
from selenium import webdriver

class MySpider(scrapy.Spider):
    name = "laptop"
    allowed_domains = ["specout.com"]
    start_urls = ["http://laptops.specout.com/l/1149/4752-32352G50Mn"]

    def __init__(self):

        # you can initialize any other webdriver (e.g. Chrome) here if you like
        self.url = "http://laptops.specout.com/l/1149/4752-32352G50Mn"
        self.webdriver = webdriver.Firefox()

    def parse(self, forbidden_resp):
        # webdriver.get() loads the page but returns None, so read the
        # rendered HTML from the driver's page_source attribute instead
        self.webdriver.get(self.url)
        resp = self.webdriver.page_source

        # wrap the rendered HTML in a scrapy TextResponse so the usual
        # selectors can be used on it
        response = TextResponse(url=self.url, body=resp, encoding='utf-8')

        # HtmlXPathSelector is deprecated as of Scrapy 1.1.0; consider
        # using the newer scrapy.Selector class instead
        hxs = HtmlXPathSelector(response)
        items = []
        item = LaptopSampleItem()
        item["title"] = hxs.select("//h1[@class='stnd-page-title']/span[@class='fn']/text()").extract()
        items.append(item)
        return items
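One caveat with this approach (a sketch, assuming the webdriver is kept on the spider instance as above): the Firefox window stays open after the crawl unless you shut it down, for example from the spider's closed hook:

    def closed(self, reason):
        # called by Scrapy when the spider finishes; quit the browser here
        self.webdriver.quit()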

Answer 1 (score: -1)

Most likely the problem is that the bot is being rejected. There are various modules to work around this. If you don't understand the spider code, you should look at the Scrapy Documentation.

Source: https://en.wikipedia.org/wiki/HTTP_403
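As an illustration of the kind of workaround such modules provide (a minimal sketch, not from the original answer; the class name and agent list are hypothetical examples), a custom downloader middleware can rotate the User-Agent header on each request:

import random

USER_AGENTS = [
    "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0 Safari/537.36",
]

class RotateUserAgentMiddleware(object):
    def process_request(self, request, spider):
        # pick a browser-like agent at random for every outgoing request
        request.headers['User-Agent'] = random.choice(USER_AGENTS)

It would then be enabled in settings.py with something like DOWNLOADER_MIDDLEWARES = {'laptop_sample.middlewares.RotateUserAgentMiddleware': 400}.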