I'm trying to parse data only from the Item and Skill Cap columns of the HTML table at http://ffxi.allakhazam.com/dyn/guilds/Alchemy.html. While parsing, I run into an alignment problem where my script pulls data from other columns.
import scrapy

class parser(scrapy.Spider):
    name = "recipe_table"
    start_urls = ['http://ffxi.allakhazam.com/dyn/guilds/Alchemy.html']

    def parse(self, response):
        for row in response.xpath('//*[@class="datatable sortable"]//tr'):
            data = row.xpath('td//text()').extract()
            if not data:  # skip empty row
                continue
            yield {
                'name': data[0],
                'cap': data[1],
                # 'misc': data[2]
            }
Output of scrapy runspider cap.py -t json: when it reaches the third row, it parses data from an unexpected column. I don't understand how the selection goes wrong.
2019-05-09 19:41:28 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://ffxi.allakhazam.com/dyn/guilds/Alchemy.html> (referer: None)
2019-05-09 19:41:28 [scrapy.core.scraper] DEBUG: Scraped from <200 http://ffxi.allakhazam.com/dyn/guilds/Alchemy.html>
{'item_name': u'Banquet Set', 'cap': u'0'}
2019-05-09 19:41:28 [scrapy.core.scraper] DEBUG: Scraped from <200 http://ffxi.allakhazam.com/dyn/guilds/Alchemy.html>
{'item_name': u'Banquet Table', 'cap': u'0'}
2019-05-09 19:41:28 [scrapy.core.scraper] DEBUG: Scraped from <200 http://ffxi.allakhazam.com/dyn/guilds/Alchemy.html>
{'item_name': u'Cermet Kilij', 'cap': u'Cermet Kilij +1'}
Answer 0 (score: 1)
How about selecting the source columns explicitly with XPath?
for row in response.xpath('//*[@class="datatable sortable"]//tr'):
    yield {
        'name': row.xpath('./td[1]/text()').extract_first(),
        'cap': row.xpath('./td[3]/text()').extract_first(),
        # 'misc': etc.
    }
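For context, the misalignment most likely comes from row.xpath('td//text()'), which flattens every text node in the whole row into one list; when a cell contains nested markup (for example a link around the "Cermet Kilij +1" entry), the extra text node shifts the indices, so data[1] is no longer the Skill Cap. Below is a minimal sketch, not the original spider, that joins all text nodes inside each target cell so nested tags can't shift the columns. The spider name and the assumption that the cell text may sit inside nested tags are mine; the column positions (td[1] and td[3]) are taken from the answer above.

import scrapy

class RecipeTableJoinedSpider(scrapy.Spider):
    # hypothetical variant of the spider above
    name = "recipe_table_joined"
    start_urls = ['http://ffxi.allakhazam.com/dyn/guilds/Alchemy.html']

    def parse(self, response):
        for row in response.xpath('//*[@class="datatable sortable"]//tr'):
            # td[N]//text() gathers text from any nested tags (e.g. links) inside that
            # one cell, so joining keeps the value intact per column
            name = ''.join(row.xpath('./td[1]//text()').extract()).strip()
            cap = ''.join(row.xpath('./td[3]//text()').extract()).strip()
            if not name:  # skip header/empty rows
                continue
            yield {'name': name, 'cap': cap}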