Scraping an ID element with Python

Time: 2019-02-03 21:14:18

Tags: python web-scraping beautifulsoup scrapy

For a research project I am running a very basic spider to extract the ID element 'resultStats' from a Google News results summary (see below). However, the CSV file created by my search shows no results.

Is there any neat adjustment to the code below that would make it output the value assigned to 'resultStats'?

I am using the Scrapy module for this search, although I have read a number of claims that BeautifulSoup can be a good complement. If it is more feasible, I would be happy to consider the BS4 module (a rough BS4 sketch follows my current code below).

import scrapy


class QuotesSpider(scrapy.Spider):
    name = 'termcheck'
    start_urls = [
        'https://www.google.com/search?q=elon+musk&biw=1440&bih=752&source=lnt&tbs=cdr%3A1%2Ccd_min%3A1%2F1%2F2015%2Ccd_max%3A12%2F31%2F2015&tbm=nws',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'resultStats': resultStats.css('span.text::text').get(),
            }

        next_page = response.css('li.next a::attr("href")').get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)
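
To be concrete, here is a rough sketch of the kind of adjustment I had in mind, targeting the element by its HTML id instead of the 'div.quote' class. The '#resultStats' selector and the spider name are assumptions on my part, and Google may serve different markup to, or simply block, automated clients:

import scrapy


class ResultStatsSpider(scrapy.Spider):
    # Hypothetical spider name; only the selector logic matters here.
    name = 'resultstats_sketch'
    start_urls = [
        'https://www.google.com/search?q=elon+musk&tbm=nws',
    ]

    def parse(self, response):
        # '#resultStats' targets the element by its HTML id attribute.
        # Assumption (unverified): Google returns this element in the HTML it serves to Scrapy.
        stats = response.css('#resultStats::text').get()
        yield {'resultStats': stats}

If my understanding is right, I would also need to run the spider with an explicit output file, e.g. scrapy runspider quotes_spider.py -o termcheck.csv, since -t on its own only sets the feed format.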

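And if BeautifulSoup turns out to be the more workable route, I imagine something along these lines. The requests call, the User-Agent header, and the assumption that Google returns the resultStats element to a plain GET are all unverified on my part:

import requests
from bs4 import BeautifulSoup

# Assumption: Google returns the classic results markup to a plain GET request;
# in practice it may block non-browser clients or serve different HTML.
url = 'https://www.google.com/search?q=elon+musk&tbm=nws'
response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})

soup = BeautifulSoup(response.text, 'html.parser')
stats = soup.find(id='resultStats')  # look the element up by its id attribute
print(stats.get_text() if stats else 'resultStats element not found')
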
The runtime log is below:

$ scrapy runspider ~/Desktop/Python/quotes_spider.py -t csv
 2019-02-03 21:42:46 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: scrapybot)
 2019-02-03 21:42:46 [scrapy.utils.log] INFO: Versions: lxml 4.3.0.0, libxml2 2.9.9, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 18.9.0, Python 3.7.2 (v3.7.2:9a3ffc0492, Dec 24 2018, 02:44:43) - [Clang 6.0 (clang-600.0.57)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1a  20 Nov 2018), cryptography 2.5, Platform Darwin-18.0.0-x86_64-i386-64bit
 2019-02-03 21:42:46 [scrapy.crawler] INFO: Overridden settings: {'SPIDER_LOADER_WARN_ONLY': True}
 2019-02-03 21:42:46 [scrapy.extensions.telnet] INFO: Telnet Password: ae15baa4482b320f
 2019-02-03 21:42:46 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
 2019-02-03 21:42:47 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
 2019-02-03 21:42:47 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
 2019-02-03 21:42:47 [scrapy.middleware] INFO: Enabled item pipelines:
[]
 2019-02-03 21:42:47 [scrapy.core.engine] INFO: Spider opened
 2019-02-03 21:42:47 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
 2019-02-03 21:42:47 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
 2019-02-03 21:42:47 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.google.com/search?q=elon+musk&biw=1440&bih=752&source=lnt&tbs=cdr%3A1%2Ccd_min%3A1%2F1%2F2015%2Ccd_max%3A12%2F31%2F2015&tbm=nws> (referer: None)
 2019-02-03 21:42:47 [scrapy.core.engine] INFO: Closing spider (finished)
 2019-02-03 21:42:47 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 329,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 15468,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 2, 3, 21, 42, 47, 720120),
 'log_count/DEBUG': 1,
 'log_count/INFO': 9,
 'memusage/max': 50012160,
 'memusage/startup': 50012160,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2019, 2, 3, 21, 42, 47, 160085)}
 2019-02-03 21:42:47 [scrapy.core.engine] INFO: Spider closed (finished)
