I am following the official Scrapy tutorial and want to scrape data from http://quotes.toscrape.com. The tutorial shows how to extract the data with the following spider:
import scrapy


class QuotesSpiderCss(scrapy.Spider):
    name = "quotes_css"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        quotes = response.css('div.quote')
        for quote in quotes:
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
                'tags': quote.css('div.tags::text').extract()
            }
I then run the spider and export the scraped items to a JSON file (command shown below).
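This is roughly the command I use, assuming the spider lives in a standard Scrapy project (the output file name is just an example):

scrapy crawl quotes_css -o quotes-css.json

The resulting file contains: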
[
{"text": "\u201cThe world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.\u201d", "author": "Albert Einstein", "tags": ["\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n "]},
{"text": "\u201cIt is our choices, Harry, that show what we truly are, far more than our abilities.\u201d", "author": "J.K. Rowling", "tags": ["\n Tags:\n ", " \n \n ", "\n \n ", "\n \n "]},
{"text": "\u201cThere are only two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle.\u201d", "author": "Albert Einstein", "tags": ["\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n "]},
...]
I tried to write the same spider using XPath instead of CSS selectors:
class QuotesSpiderXpath(scrapy.Spider):
    name = 'quotes_xpath'
    start_urls = [
        'http://quotes.toscrape.com/page/1/'
    ]

    def parse(self, response):
        quotes = response.xpath('//div[@class="quote"]')
        for quote in quotes:
            yield {
                'text': quote.xpath("//span[@class='text']/text()").extract_first(),
                'author': quote.xpath("//small[@class='author']/text()").extract_first(),
                'tags': quote.xpath("//div[@class='tags']/text()").extract()
            }
But this spider gives me a list in which every item contains the same quote:
[
{"text": "\u201cThe world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.\u201d", "author": "Albert Einstein", "tags": ["\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n "]},
{"text": "\u201cThe world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.\u201d", "author": "Albert Einstein", "tags": ["\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n "]},
{"text": "\u201cThe world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.\u201d", "author": "Albert Einstein", "tags": ["\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n "]},
...]
Thanks in advance!
Answer 0 (score: 1)
You keep getting the same quote because you are not using relative XPath expressions. See the documentation on working with relative XPaths.
Prefix your XPath expressions with a dot, as in the parse method below:
def parse(self, response):
    quotes = response.xpath('//div[@class="quote"]')
    for quote in quotes:
        yield {
            'text': quote.xpath(".//span[@class='text']/text()").extract_first(),
            'author': quote.xpath(".//small[@class='author']/text()").extract_first(),
            'tags': quote.xpath(".//div[@class='tags']/text()").extract()
        }
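To see why the dot matters, here is a minimal sketch using parsel (the selector library Scrapy is built on); the HTML snippet and the "first"/"second" labels are made up purely for illustration:

from parsel import Selector

# Two quotes; with an absolute XPath the inner query always matches the first one.
html = """
<div class="quote"><span class="text">first</span></div>
<div class="quote"><span class="text">second</span></div>
"""
sel = Selector(text=html)

for quote in sel.xpath('//div[@class="quote"]'):
    # Absolute: starts from the document root, so it ignores `quote` entirely.
    print(quote.xpath("//span[@class='text']/text()").extract_first())   # always "first"
    # Relative: the leading dot restricts the search to the current quote node.
    print(quote.xpath(".//span[@class='text']/text()").extract_first())  # "first", then "second"

The same fix applies to the text, author and tags expressions alike: without the leading dot, each of them is evaluated against the whole page on every loop iteration, which is why every yielded item repeats the first quote and accumulates the tag text of all quotes.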