Spider gets no response

Asked: 2020-08-27 14:01:16

Tags: python web-scraping scrapy response

I am very new to Python and web scraping. I have tried to solve this myself, but nothing has helped so far. I want to scrape all the DND homebrew monsters in order to compare their descriptions with the official ones. I already have the official monsters done, but I am stuck on the homebrew ones. This is my code so far, and this is the site I am trying to scrape: https://www.dndbeyond.com/homebrew/monsters

import scrapy

from scrapy.crawler import CrawlerProcess


class homebrew(scrapy.Spider):
       
    name = "homebrew"
    custom_settings = {
        'AUTOTHROTTLE_ENABLED': True,
    }
    user_agent = {'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:79.0) Gecko/20100101 Firefox/79.0"}
    
    handle_httpstatus_list = [403, 404]
    def start_requests(self):       
        
        url = "https://www.dndbeyond.com/homebrew/monsters"     
        yield scrapy.Request(url=url, callback=self.get_urls)
 
    def get_urls(self, response):         
        
        urls = response.xpath('//a[@class = "link"]/@href').getall()
        for link in urls:
            print(link)      #this is my test print. But it never gets any url   

            yield response.follow(url="https://www.dndbeyond.com" + link, callback=self.get_all)
        
        next_page = response.xpath('//a[text()="Next"]/@href').get()       
        if next_page is not None:
           yield response.follow(url=next_page, callback=self.get_urls)

    def get_all(self, response):
        # here follows other code, not relevant for my question
        pass
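
I launch the spider with the CrawlerProcess imported above, roughly like this (a simplified sketch, not my exact run code):

process = CrawlerProcess()
process.crawl(homebrew)   # the spider class defined above
process.start()           # blocks until the crawl finishes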

This is my output:
2020-08-27 15:33:48 [scrapy.crawler] INFO: Overridden settings: {'AUTOTHROTTLE_ENABLED': True}
2020-08-27 15:33:48 [scrapy.extensions.telnet] INFO: Telnet Password: 1e7e3a035d452c29
2020-08-27 15:33:48 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.throttle.AutoThrottle']
2020-08-27 15:33:48 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-08-27 15:33:48 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-08-27 15:33:48 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-08-27 15:33:48 [scrapy.core.engine] INFO: Spider opened
2020-08-27 15:33:48 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-08-27 15:33:48 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-08-27 15:33:49 [scrapy.core.engine] DEBUG: Crawled (403) <GET https://www.dndbeyond.com/homebrew/monsters> (referer: None)
2020-08-27 15:33:49 [scrapy.core.engine] INFO: Closing spider (finished)
2020-08-27 15:33:49 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 209,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 3881,
 'downloader/response_count': 1,
 'downloader/response_status_count/403': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 8, 27, 13, 33, 49, 213985),
 'log_count/DEBUG': 1,
 'log_count/INFO': 9,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2020, 8, 27, 13, 33, 48, 870714)}
2020-08-27 15:33:49 [scrapy.core.engine] INFO: Spider closed (finished)

1 Answer:

Answer 0 (score: 0)

class homebrew(scrapy.Spider):
       
    name = "homebrew"
    custom_settings = {
        'AUTOTHROTTLE_ENABLED': True,
    }
    user_agent = {'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:79.0) Gecko/20100101 Firefox/79.0"}
    
    handle_httpstatus_list = [403, 404]
    def start_requests(self):       
        
        url = "https://www.dndbeyond.com/homebrew/monsters"     
        yield scrapy.Request(url=url, callback=self.get_urls)
 
    def get_urls(self, response):         
        
        urls = response.xpath('//a[@class="link"]/@href').extract()
        for link in urls:
            yield scrapy.Request(url="https://www.dndbeyond.com" + link, callback=self.get_all)
        
        next_page = response.xpath('//a[@rel="next"]/@href').extract_first() 
        if next_page:
           yield scrapy.Request(url=next_page, callback=self.get_urls)

- Your XPath is wrong (remove the spaces)

- Don't use response.follow that way

- if next_page is not None is bad style; use if next_page instead (see the sketch below)
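
To make the last point concrete, here is a small standalone sketch (the href values are made up) showing that the plain truthiness check also skips an empty href string, which an is-not-None check would let through:

# Hypothetical href values, covering what extract_first() can return:
# a real link, an empty attribute, and None when nothing matches.
for next_page in ["/homebrew/monsters?page=2", "", None]:
    if next_page:                  # True only for a non-empty href
        print("follow:", next_page)
    else:                          # covers both None and ""
        print("stop pagination")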