How do I get data from the pages behind a list of links in Scrapy?

Asked: 2017-03-29 03:40:55

Tags: python web-scraping scrapy scrapy-spider

I have a web page to scrape. On the page is a list of links inside a <table>. I am trying to use the rules section to have Scrapy follow those links and collect data from the pages they point to. Here is my code:

class ToScrapeSpiderXPath(scrapy.Spider):
    name = 'coinmarketcap'
    start_urls = [
        'https://coinmarketcap.com/currencies/views/all/'
    ]

    rules = (
        Rule(LinkExtractor(allow=(), restrict_xpaths=('//tr/td[2]/a/@href',)), callback="parse", follow= True),
    )

    def parse(self, response):
        print("TEST TEST TEST")
        BTC = BTCItem()
        BTC['source'] = str(response.request.url).split("/")[2]
        BTC['asset'] = str(response.request.url).split("/")[4],
        BTC['asset_price'] = response.xpath('//*[@id="quote_price"]/text()').extract(),
        BTC['asset_price_change'] = response.xpath('/html/body/div[2]/div/div[1]/div[3]/div[2]/span[2]/text()').extract(),
        BTC['BTC_price'] = response.xpath('/html/body/div[2]/div/div[1]/div[3]/div[2]/small[1]/text()').extract(),
        BTC['Prct_change'] = response.xpath('/html/body/div[2]/div/div[1]/div[3]/div[2]/small[2]/text()').extract()
        yield (BTC)

My problem is that Scrapy is not following the links. It just tries to extract data from the page that contains the list of links. What am I missing?

Update #1: Why "Crawled" vs. "Scraped"?

2017-03-28 23:10:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://coinmarketcap.com/currencies/pivx/> (referer: None)
2017-03-28 23:10:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://coinmarketcap.com/currencies/zcash/> (referer: None)
2017-03-28 23:10:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://coinmarketcap.com/currencies/bitcoin/> (referer: None)
2017-03-28 23:10:33 [scrapy.core.scraper] DEBUG: Scraped from <200 https://coinmarketcap.com/currencies/nem/>

1 answer:

Answer 0 (score: 1)

You need to inherit from the CrawlSpider class for the link extractor rules to work:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class ToScrapeSpiderXPath(CrawlSpider):
    name = 'coinmarketcap'
    start_urls = [
        'https://coinmarketcap.com/currencies/views/all/'
    ]

    rules = (
        # Follow every link in the second column of the table rows
        Rule(LinkExtractor(restrict_xpaths='//tr/td[2]/a'), callback="parse_table_links", follow=True),
    )

    def parse_table_links(self, response):
        print(response.url)

Note that you need to fix the restrict_xpaths value: it should point to the a elements, not to their @href attributes. You can also pass it as a plain string instead of a tuple. The callback is renamed to parse_table_links on purpose: CrawlSpider uses the parse method internally to process its rules, so overriding parse silently disables link following.

Also, the allow parameter is optional, so it can simply be omitted.
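
For completeness, here is a minimal sketch that merges this fix with the item extraction from the question. The BTCItem definition and the parse_currency callback name are assumptions for illustration (adjust them to match your items.py), and only the stable id-based XPath is kept. Note also that the trailing commas in the original parse method would wrap every value in a one-element tuple.

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class BTCItem(scrapy.Item):
    # Hypothetical item definition matching the fields used in the question
    source = scrapy.Field()
    asset = scrapy.Field()
    asset_price = scrapy.Field()


class ToScrapeSpiderXPath(CrawlSpider):
    name = 'coinmarketcap'
    start_urls = ['https://coinmarketcap.com/currencies/views/all/']

    rules = (
        # Follow each currency link in the second table column
        Rule(LinkExtractor(restrict_xpaths='//tr/td[2]/a'),
             callback='parse_currency', follow=True),
    )

    def parse_currency(self, response):
        # URLs look like https://coinmarketcap.com/currencies/<asset>/
        parts = response.url.split('/')
        item = BTCItem()
        item['source'] = parts[2]  # the domain
        item['asset'] = parts[4]   # the currency slug
        # extract_first() returns a single string instead of a list;
        # no trailing comma, so the value is not wrapped in a tuple
        item['asset_price'] = response.xpath(
            '//*[@id="quote_price"]/text()').extract_first()
        yield item

With items actually being yielded from the callback, the log should show a "Scraped from" line for each currency page in addition to the "Crawled" lines, which addresses the crawled-vs-scraped difference from Update #1.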