I have this HTML structure:
<div class="column first">
  <div class="detail">
    <strong>Phone: </strong>
    <span class="value"> 012-345-6789</span>
  </div>
  <div class="detail">
    <span class="value">1 Street Address, Big Road, City, Country</span>
  </div>
  <div class="detail">
    <h3 class="inline">Area:</h3>
    <span class="value">Georgetown</span>
  </div>
  <div class="detail">
    <h3 class="inline">Nearest Train:</h3>
    <span class="value">Georgetown Station</span>
  </div>
  <div class="detail">
    <h3 class="inline">Website:</h3>
    <span class="value"><a href='http://www.website.com' target='_blank'>www.website.com</a></span>
  </div>
</div>
When I run sel = response.xpath('//span[@class="value"]/text()') in the scrapy shell, I get the result I expect, namely:
[<Selector xpath='//span[@class="value"]/text()' data=u' 012-345-6789'>, <Selector xpath='//span[@class="value"]/text()' data=u'1 Street Address, Big Road, City, Country'>, <Selector xpath='//span[@class="value"]/text()' data=u'Georgetown Station'>, <Selector xpath='//span[@class="value"]/text()' data=u' '>, <Selector xpath='//span[@class="value"]/text()' data=u'January, 2016'>]
However, in my scrapy spider's parse block, it only returns the first item:
def parse(self, response):
    def extract_with_xpath(query):
        return response.xpath(query).extract_first().strip()

    yield {
        'details': extract_with_xpath('//span[@class="value"]/text()')
    }
I realize I'm using extract_first(), but if I switch to extract() it breaks, even though I know extract() is a legitimate function. What am I doing wrong? Do I need to loop over the extract_with_xpath('//span[@class="value"]/text()') part? Any enlightenment is appreciated!
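For reference, the break presumably comes from the chained .strip(): extract_first() returns a single string (or None), while extract() returns a list of strings, so calling .strip() on that list raises an AttributeError. A minimal sketch of the two return types, using the same XPath as above:

# extract_first() -> a single string, so a chained .strip() works:
first = response.xpath('//span[@class="value"]/text()').extract_first()  # u' 012-345-6789'

# extract() -> a list of strings, so a chained .strip() would raise
# AttributeError: 'list' object has no attribute 'strip'
all_values = response.xpath('//span[@class="value"]/text()').extract()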
Answer 0 (score: 0)
In items.py, specify:
from scrapy.item import Item, Field

class yourProjectNameItem(Item):
    # define the fields for your item here like:
    name = Field()
    details = Field()
In your scrapy spider, the imports:
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from yourProjectName.items import yourProjectNameItem
import re
And the parse function as follows:
def parse_item(self, response):
    hxs = HtmlXPathSelector(response)
    i = yourProjectNameItem()
    i['name'] = hxs.select('YourXPathHere').extract()
    i['details'] = hxs.select('YourXPathHere').extract()
    return i
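To tie this back to the original error: extract() returns a list of strings, so each value has to be stripped individually rather than calling .strip() on the list itself. A minimal sketch adapted to the question's own XPath and the details field defined above (field names are just the ones from this answer):

def parse_item(self, response):
    i = yourProjectNameItem()
    # extract() yields a list of strings; strip each one in a comprehension
    i['details'] = [v.strip() for v in response.xpath('//span[@class="value"]/text()').extract()]
    return i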
Hope this solves the problem. You can refer to my project on GitHub: https://github.com/omkar-dsd/SRMSE/tree/master/Scrapers/NasaScraper