Going crazy: my Scrapy spider runs but scrapes nothing

Posted: 2019-11-15 11:01:43

Tags: python xpath web-scraping scrapy

I'm trying to crawl the Google Play Store with Scrapy. I believe the script is correct, but in practice it scrapes nothing at all, and I can't work out why.

Here is the code:

# -*- coding: utf-8 -*-
import scrapy
from scrapy import Request

from gp.items import GpItem


class GoogleSpider(scrapy.Spider):
    name = 'google'
    allowed_domains = ['play.google.com']
    start_urls = ['https://play.google.com/store/apps/']

    # An earlier attempt used CrawlSpider rules instead of manual requests:
    # rules = [
    #     Rule(LinkExtractor(allow=(r"https://play\.google\.com/store/apps/details",)),
    #          callback='parse_app', follow=True),
    # ]

    def parse(self, response):
        # Collect the category/"See more" links on the store front page.
        # These class names are machine-generated and brittle.
        urls = response.xpath(
            '//a[@class="LkLjZd ScJHi U8Ww7d xjAeve nMZKrb  id-track-click "]/@href'
        ).extract()

        for link in urls:
            # urljoin() resolves relative hrefs and keeps the https scheme,
            # instead of hard-coding "http://play.google.com" + link.
            yield Request(response.urljoin(link),
                          callback=self.parse_next,
                          dont_filter=True)

    def parse_next(self, response):
        # Each app card on a listing page links to its details page.
        app_urls = response.xpath(
            '//div[@class="b8cIId ReQCgd Q9MA7b"]/a/@href'
        ).extract()

        for url in app_urls:
            yield Request(response.urljoin(url),
                          callback=self.parse_app,
                          dont_filter=True)

    def parse_app(self, response):
        item = GpItem()
        item['app_url'] = response.url
        # extract_first() yields a single string (or None); the original code
        # stored a list for app_name and a raw SelectorList for app_icon,
        # because .extract() was missing there.
        item['app_name'] = response.xpath(
            '//h1[@itemprop="name"]/text()').extract_first()
        item['app_icon'] = response.xpath(
            '//img[@itemprop="image"]/@src').extract_first()
        yield item
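
For reference, gp/items.py is not shown in the question; it only needs to declare the three fields assigned in parse_app. A minimal sketch, reconstructed from those field names (the real file may declare more):

# gp/items.py -- hypothetical reconstruction; only the three fields
# assigned in parse_app above are known from the spider code.
import scrapy


class GpItem(scrapy.Item):
    app_url = scrapy.Field()
    app_name = scrapy.Field()
    app_icon = scrapy.Field()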

This is the output in the terminal when I run the spider (scrapy crawl google):

2019-11-15 10:55:28 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: gp)
2019-11-15 10:55:28 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.1, Python 3.7.1 (default, Dec 14 2018, 13:28:58) - [Clang 4.0.1 (tags/RELEASE_401/final)], pyOpenSSL 18.0.0 (OpenSSL 1.1.1a  20 Nov 2018), cryptography 2.4.2, Platform Darwin-18.5.0-x86_64-i386-64bit
2019-11-15 10:55:28 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'gp', 'NEWSPIDER_MODULE': 'gp.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['gp.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.87 Safari/537.36'}
2019-11-15 10:55:28 [scrapy.extensions.telnet] INFO: Telnet Password: ca6fa8970302a9fb
2019-11-15 10:55:28 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2019-11-15 10:55:28 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-11-15 10:55:28 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-11-15 10:55:28 [scrapy.middleware] INFO: Enabled item pipelines:
['gp.pipelines.GpPipeline']
2019-11-15 10:55:28 [scrapy.core.engine] INFO: Spider opened
2019-11-15 10:55:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-11-15 10:55:28 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-11-15 10:55:28 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://play.google.com/robots.txt> (referer: None)
2019-11-15 10:55:28 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.google.com/sorry/index?continue=https://play.google.com/store/apps/&q=EgRcBtHqGKCIuu4FIhkA8aeDS5g5w_8B4TmR8HklMm8-Dkpu_TRlMgFy> from <GET https://play.google.com/store/apps/>
2019-11-15 10:55:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.google.com/robots.txt> (referer: None)
2019-11-15 10:55:29 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt: <GET https://www.google.com/sorry/index?continue=https://play.google.com/store/apps/&q=EgRcBtHqGKCIuu4FIhkA8aeDS5g5w_8B4TmR8HklMm8-Dkpu_TRlMgFy>
2019-11-15 10:55:29 [scrapy.core.engine] INFO: Closing spider (finished)
2019-11-15 10:55:29 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
 'downloader/exception_type_count/scrapy.exceptions.IgnoreRequest': 1,
 'downloader/request_bytes': 1308,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 5372,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 2,
 'downloader/response_status_count/302': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 11, 15, 10, 55, 29, 282065),
 'log_count/DEBUG': 4,
 'log_count/INFO': 9,
 'memusage/max': 54194176,
 'memusage/startup': 54194176,
 'response_received_count': 2,
 'robotstxt/forbidden': 1,
 'robotstxt/request_count': 2,
 'robotstxt/response_count': 2,
 'robotstxt/response_status_count/200': 2,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2019, 11, 15, 10, 55, 28, 496345)}
2019-11-15 10:55:29 [scrapy.core.engine] INFO: Spider closed (finished)
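
Two DEBUG lines in the log stand out: the start URL is redirected (302) to a www.google.com/sorry page (Google's automated-traffic check), and that redirected URL is then "Forbidden by robots.txt", after which the spider closes without parsing anything. Since the overridden settings show 'ROBOTSTXT_OBEY': True, the blocked request is dropped with IgnoreRequest rather than fetched. For illustration, this is the knob involved in the project's settings.py (a sketch; whether changing it is appropriate is a separate question):

# gp/settings.py (excerpt). With ROBOTSTXT_OBEY = True, any request that
# robots.txt disallows is dropped, which is what the log above shows.
ROBOTSTXT_OBEY = True

# Setting it to False stops Scrapy from consulting robots.txt at all,
# but it would not make the /sorry redirect itself go away.
# ROBOTSTXT_OBEY = False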

I've checked the urls XPath selector and I believe it is correct, so I don't understand where the problem lies. Can anyone help me figure this out?
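
One way to sanity-check the selector outside the spider is Scrapy's interactive shell. A rough sketch follows; the contains() form is a loosened guess at a more robust match, not a verified selector, and if the fetch itself is redirected to the /sorry page the expression returns an empty list no matter what:

$ scrapy shell 'https://play.google.com/store/apps/'
>>> response.url   # check whether we actually got the store page
>>> response.xpath('//a[contains(@class, "id-track-click")]/@href').extract()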

0 Answers