Scrapy spider pauses after crawling and scraping requests

Time: 2019-07-18 12:32:38

Tags: python scrapy pycharm

I am trying to scrape MichaelKors.com. My spider crawls correctly and has scraped 572 items. However, it gets stuck on one request. Here is the log:

2019-07-18 04:24:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.michaelkors.com/painterly-reef-print-crepe-ruffled-skirt/_/R-US_MU97ETPBPL> (referer: https://www.michaelkors.com/women/clothing/skirts-shorts/_/N-28en)
2019-07-18 04:24:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.michaelkors.com/rainbow-stretch-viscose-pencil-skirt/_/R-US_MU97EYUBZV> (referer: https://www.michaelkors.com/women/clothing/skirts-shorts/_/N-28en)
2019-07-18 04:24:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.michaelkors.com/rainbow-logo-striped-georgette-skirt/_/R-US_MU97EZ0BZL> (referer: https://www.michaelkors.com/women/clothing/skirts-shorts/_/N-28en)
2019-07-18 04:24:29 [scrapy.extensions.logstats] INFO: Crawled 664 pages (at 11 pages/min), scraped 575 items (at 14 items/min)
2019-07-18 04:24:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.michaelkors.com/striped-stretch-cotton-pencil-skirt/_/R-US_MU97EY1BVG> (referer: https://www.michaelkors.com/women/clothing/skirts-shorts/_/N-28en)
2019-07-18 04:24:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.michaelkors.com/butterfly-print-crepe-wrap-skirt/_/R-US_MS97EX3AXN> (referer: https://www.michaelkors.com/women/clothing/skirts-shorts/_/N-28en)
2019-07-18 04:24:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.michaelkors.com/medallion-lace-skirt/_/R-US_MU97EZ9BXW> (referer: https://www.michaelkors.com/women/clothing/skirts-shorts/_/N-28en)
2019-07-18 04:24:29 [urllib3.connectionpool] DEBUG: Starting new HTTPS connection (1): michaelkors.scene7.com:443

My spider code looks like this:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class MichaelKorsClass(CrawlSpider):
    name = 'michaelkors'
    allowed_domains = ['www.michaelkors.com']
    start_urls = ['https://www.michaelkors.com/women/clothing/dresses/_/N-28ei']
    rules = (
        # Rule(LinkExtractor(allow=('(.*\/_\/R-\w\w_)([\-a-zA-Z0-9]*)$',), deny=('((.*investors.*)|(/info/)|(contact\-us)|(checkout))',)), callback='parse_product'),
        # Note: spaces inside a regex alternation are literal, so
        # "(checkout) | (gifts)" would not deny a plain "gifts" URL.
        Rule(LinkExtractor(allow=('(.*\/_\/)(N-[\-a-zA-Z0-9]*)$',),
                           deny=('((.*investors.*)|(/info/)|(contact\-us)|(checkout)|(gifts))',)),
             callback='parse_list'),
    )

    def parse_product(self, response):
        ...

    def parse_list(self, response):
        # HtmlXPathSelector is deprecated; response.xpath() works directly.
        product_count = response.xpath('//span[@class="product-count"]/text()').get()

        try:
            product_count = int(product_count)
            is_listing_page = True
        except (TypeError, ValueError):
            # No product count on the page, or it is not a number.
            is_listing_page = False
        if is_listing_page:
            for product_url in response.xpath('//ul[@class="product-wrapper product-wrapper-four-tile"]//li[@class="product-name-container"]/a/@href').getall():
                yield scrapy.Request(response.urljoin(product_url), callback=self.parse_product)
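One detail worth checking in the deny pattern of the Rule above: whitespace inside a regular expression alternation is literal. A pattern written as `(checkout) | (gifts)` (with spaces around the `|`) requires `checkout ` with a trailing space or ` gifts` with a leading space, so a normal `/gifts/` URL slips through the filter. A quick check (the URL is just an illustration):

```python
import re

# Alternation with literal spaces: "checkout " / " gifts" are required,
# so a URL containing plain "gifts" is NOT denied.
spaced = r'((.*investors.*)|(/info/)|(contact\-us)|(checkout) | (gifts))'
fixed = r'((.*investors.*)|(/info/)|(contact\-us)|(checkout)|(gifts))'

url = 'https://www.michaelkors.com/gifts/'
print(re.search(spaced, url))  # None: "gifts" slips through
print(re.search(fixed, url))   # match: "gifts" is denied as intended
```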

parse_list() crawls the site recursively by checking the number of products listed and then crawling each of those products, and parse_product does the further processing (downloading, etc.). My code works fine, but it gets stuck at the point shown in the log above. When it does not get stuck, it opens an HTTPS connection and requests the image URL, like this:
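The `urllib3.connectionpool` lines in the log suggest the image is fetched outside Scrapy's downloader (e.g. with `requests` or a similar blocking client inside `parse_product`); a blocking call with no timeout there can freeze the whole single-threaded reactor on one slow connection. A minimal sketch of the idea, assuming the download happens in a helper like the hypothetical `fetch_image` below (using stdlib `urllib` for illustration), is to always pass an explicit timeout so a stalled connection raises instead of hanging:

```python
import socket
import urllib.request
from urllib.error import URLError

# URL taken from the log above; fetch_image is a hypothetical stand-in
# for whatever parse_product does to download images.
IMAGE_URL = 'https://michaelkors.scene7.com/is/image/MichaelKors/MH73E94C64-0100_2'

def fetch_image(url, timeout=30):
    try:
        # timeout= makes a stalled connection raise instead of blocking
        # Scrapy's reactor (and the whole crawl) indefinitely.
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except (URLError, socket.timeout):
        return None  # give up on this image instead of hanging the spider
```

A more idiomatic alternative is to let Scrapy itself download the images (e.g. by yielding a scrapy.Request for the image URL, or using the built-in ImagesPipeline), since Scrapy's downloader already applies the `DOWNLOAD_TIMEOUT` setting and retry logic.
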

2019-07-18 04:23:57 [urllib3.connectionpool] DEBUG: Starting new HTTPS connection (1): michaelkors.scene7.com:443
2019-07-18 04:24:00 [urllib3.connectionpool] DEBUG: https://michaelkors.scene7.com:443 "GET /is/image/MichaelKors/MH73E94C64-0100_2 HTTP/1.1" 200 7267

I hope I have explained my problem clearly. If not, please tell me what I should add to or remove from the code.

0 Answers:

There are no answers yet.