Permission denied error while crawling

Date: 2019-07-23 15:10:07

Tags: python web-scraping scrapy

I am scraping the following website, http://www.starcitygames.com/buylist/, but I keep getting the error below and I can't figure out what is causing it. When I first wrote the program it worked fine without any errors and scraped all the data I needed, but now I get this error and I don't know why. I have tried changing the Splash URL and the user agent, but that didn't help and it still gives me the same error:

2019-07-23 12:37:28 [scrapy.core.engine] INFO: Spider opened
2019-07-23 12:37:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-07-23 12:37:28 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-07-23 12:37:28 [scrapy.extensions.throttle] INFO: slot: www.starcitygames.com | conc: 1 | delay:15000 ms (+0) | latency:  148 ms | size:     0 bytes
2019-07-23 12:37:28 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (307) to <GET https://www.starcitygames.com/login> from <GET http://www.starcitygames.com/buylist/>
2019-07-23 12:37:43 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.starcitygames.com/login> (failed 1 times): An error occurred while connecting: 13: Permission denied.
2019-07-23 12:38:04 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.starcitygames.com/login> (failed 2 times): An error occurred while connecting: 13: Permission denied.
2019-07-23 12:38:24 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET https://www.starcitygames.com/login> (failed 3 times): An error occurred while connecting: 13: Permission denied.
2019-07-23 12:38:24 [scrapy.core.scraper] ERROR: Error downloading <GET https://www.starcitygames.com/login>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
twisted.internet.error.ConnectError: An error occurred while connecting: 13: Permission denied.
2019-07-23 12:38:24 [scrapy.core.engine] INFO: Closing spider (finished)

LoginSpider.py

# Import needed functions and call needed python files
import scrapy
import json
from scrapy.spiders import Spider
from scrapy_splash import SplashRequest
from ..items import DataItem

# Spider class
class LoginSpider(scrapy.Spider):
    # Name of spider
    name = "LoginSpider"

    # URL where the data is located
    start_urls = ["http://www.starcitygames.com/buylist/"]

    # Login function
    def parse(self, response):
        # Log in using the email and password, then proceed to the after_login function
        return scrapy.FormRequest.from_response(
            response,
            formcss='#existing_users form',
            formdata={'ex_usr_email': 'example@email.com', 'ex_usr_pass': 'password'},
            callback=self.after_login
        )


    # Function to parse the buylist website
    def after_login(self, response):
        # Loop through the page, get the ID number for each card category, plug it into
        # the end of the URL below, then go to the parse_data function
        for category_id in response.xpath('//select[@id="bl-category-options"]/option/@value').getall():
            yield scrapy.Request(
                    url="http://www.starcitygames.com/buylist/search?search-type=category&id={category_id}".format(category_id=category_id),
                    callback=self.parse_data,
                    )
    # Function to parse the JSON data
    def parse_data(self, response):
        # Declare variables
        jsonresponse = json.loads(response.body_as_unicode())
        # Call DataItem class from items.py
        items = DataItem()

        # Scrape category name
        items['Category'] = jsonresponse['search']
        # Loop where the rest of the data is located
        for result in jsonresponse['results']:
            # Inside this loop, run through loop until all data is scraped
            for index in range(len(result)):
                # Scrape the rest of needed data
                items['Card_Name'] = result[index]['name']
                items['Condition'] = result[index]['condition']
                items['Rarity'] = result[index]['rarity']
                items['Foil'] = result[index]['foil']
                items['Language'] = result[index]['language']
                items['Buy_Price'] = result[index]['price']
                # Return all data
                yield items

settings.py

# Name of project
BOT_NAME = 'LoginSpider'

# Module where spider is
SPIDER_MODULES = ['LoginSpider.spiders']
# Module where new spiders are created
NEWSPIDER_MODULE = 'LoginSpider.spiders'

# Obey robots.txt rules set by the website; disabled here so the crawler is not blocked as a scraper
ROBOTSTXT_OBEY = False

# The path of the CSV file that contains the proxies/user agents paired with URLs
#PROXY_CSV_FILE = "url.csv"
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36'
# The downloader middleware is a framework of hooks into Scrapy's request/response processing.
# It's a light, low-level system for globally altering Scrapy's requests and responses.
DOWNLOADER_MIDDLEWARES = {
        # This middleware enables working with sites that require cookies, such as those that use sessions.
        # It keeps track of cookies sent by web servers, and send them back on subsequent requests (from that spider), just like web browsers do.
        'scrapy_splash.SplashCookiesMiddleware': 723,

        'scrapy_splash.SplashMiddleware': 725,
        # This middleware allows compressed (gzip, deflate) traffic to be sent/received from web sites.
        # This middleware also supports decoding brotli-compressed responses, provided brotlipy is installed.
        'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

# URL that splash server is running on, must be activated to use splash
SPLASH_URL = 'http://199.89.192.98:8050'
# The class used to detect and filter duplicate requests
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
# This middleware provides low-level cache to all HTTP requests and responses. It has to be combined with a cache storage backend as well as a cache policy.
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

# The maximum number of concurrent (ie. simultaneous) requests that will be performed by the Scrapy downloader (default: 16)

CONCURRENT_ITEMS = 1
CONCURRENT_REQUESTS = 1

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs

# If enabled, Scrapy will wait a random amount of time (between 0.5 * DOWNLOAD_DELAY and 1.5 * DOWNLOAD_DELAY) while fetching requests from the same website.
RANDOMIZE_DOWNLOAD_DELAY = True
# Delay between scraping webpages
DOWNLOAD_DELAY = 10
# The download delay setting will honor only one of:
# Number of concurrent requests made to one domain (enabled)
CONCURRENT_REQUESTS_PER_DOMAIN = 1
# Number of concurrent requests made to one IP (disabled)
#CONCURRENT_REQUESTS_PER_IP = 1

# Cookies (enabled by default)
# Whether to enable the cookies middleware. If disabled, no cookies will be sent to web servers.
COOKIES_ENABLED = True
#REDIRECT_ENABLED = False
# Disable Telnet Console (enabled by default)
# A boolean which specifies if the telnet console will be enabled (provided its extension is also enabled)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
        'Referer': 'http://www.starcitygames.com/buylist/'
}

3 Answers:

Answer 0 (score: 0)

Permission denied.

In 99% of cases this means your IP has been banned for a period of time. My suggestions:

  1. Wait until you are allowed to make new requests again, or try logging in to the site with these credentials.
  2. Add a proxy. You could use the tor-polipo-haproxy Docker image; it might help (see the sketch after this list).
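
As a minimal sketch of option 2 (not part of the original answer), assuming a proxy such as the tor-polipo-haproxy container is listening locally, a request can be routed through it by setting the proxy key in the request meta, which Scrapy's built-in HttpProxyMiddleware picks up. The spider name and the address http://127.0.0.1:8118 below are placeholders:

# Hypothetical example: route requests through a local HTTP proxy.
# Replace http://127.0.0.1:8118 with wherever your proxy is actually listening.
import scrapy

class ProxiedSpider(scrapy.Spider):
    name = "proxied_example"
    start_urls = ["http://www.starcitygames.com/buylist/"]

    def start_requests(self):
        for url in self.start_urls:
            # HttpProxyMiddleware reads the 'proxy' key from request.meta
            yield scrapy.Request(url, meta={"proxy": "http://127.0.0.1:8118"}, callback=self.parse)

    def parse(self, response):
        self.logger.info("Fetched %s through the proxy", response.url)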

Answer 1 (score: 0)

Most sites don't like scrapers, because they put a lot of extra load on the server.

You can try increasing the value of DOWNLOAD_DELAY in your code, or try another, more site-friendly scraping approach, for example using Selenium.
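
For instance, a more conservative throttle in settings.py might look like the sketch below; the exact numbers are illustrative assumptions rather than values given in this answer:

# Slow the crawl down and let AutoThrottle adapt the delay to how the server responds
DOWNLOAD_DELAY = 30
RANDOMIZE_DOWNLOAD_DELAY = True
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 30
AUTOTHROTTLE_MAX_DELAY = 120
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
CONCURRENT_REQUESTS_PER_DOMAIN = 1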

Answer 2 (score: 0)

What ended up fixing it was installing Scrapy-Cookies, a middleware that enables Scrapy to manage, save and restore cookies in various ways. With this middleware Scrapy can easily re-use cookies saved before or during runs of multiple spiders, and even share cookies between spiders across a cluster. Being able to share the cookies this way is what solved the problem. I also added this code to settings.py:

DOWNLOADER_MIDDLEWARES.update({
    # Disable Scrapy's built-in cookies middleware and use the one from Scrapy-Cookies instead
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': None,
    'scrapy_cookies.downloadermiddlewares.cookies.CookiesMiddleware': 700,
})
# Keep the cookies in an in-memory SQLite store
COOKIES_STORAGE = 'scrapy_cookies.storage.sqlite.SQLiteStorage'
COOKIES_SQLITE_DATABASE = ':memory:'
COOKIES_PERSISTENCE_DIR = 'your-cookies-path'