I'm exhausted, only slept 3 hours and have been awake for over 20, so please forgive any mistakes.
I'm trying to use multiple XPath selectors, but I can't get them to work. Apparently this code has a flaw: it loops over the descriptions, ends up with the last item's description, and assigns it to every item. Screenshot and code:
A screenshot showing what I mean visually: http://puu.sh/fBjA9/da85290fc2.png
Code (Scrapy web crawler, Python): the spider
def parse(self, response):
    item = DmozItem()
    for sel in response.xpath("//td[@class='nblu tabcontent']"):
        item['title'] = sel.xpath("a/big/text()").extract()
        item['link'] = sel.xpath("a/@href").extract()
        for sel in response.xpath("//td[contains(@class,'framed')]"):
            item['description'] = sel.xpath("b/text()").extract()
        yield item
The pipeline
def process_item(self, item, spider):
    self.cursor.execute("SELECT * FROM data WHERE title= %s", item['title'])
    result = self.cursor.fetchall()
    if result:
        log.msg("Item already in database: %s" % item, level=log.DEBUG)
    else:
        self.cursor.execute(
            "INSERT INTO data(title, url, description) VALUES (%s, %s, %s)",
            (item['title'][0], item['link'][0], item['description'][0]))
        self.connection.commit()
        log.msg("Item stored: %s" % item, level=log.DEBUG)
    return item

def handle_error(self, e):
    log.err(e)
Thanks for reading and for any help.
Answer 0 (score: 1)
The problem is that the nodes matched by "//td[@class='nblu tabcontent']" and "//td[contains(@class,'framed')]" are in one-to-one correspondence; you can't iterate over one nested inside the other, or, as you've found, every item just ends up with the last description from the inner list.
Instead, try:
def parse(self, response):
    title_links = response.xpath("//td[@class='nblu tabcontent']")
    descriptions = response.xpath("//td[contains(@class,'framed')]")
    for tl, d in zip(title_links, descriptions):
        item = DmozItem()
        item['title'] = tl.xpath("a/big/text()").extract()
        item['link'] = tl.xpath("a/@href").extract()
        item['description'] = d.xpath("b/text()").extract()
        yield item
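A side note that is not from the original answer: zip() pairs the two node lists positionally and silently stops at the shorter one, so this fix assumes every 'nblu tabcontent' cell on the page has exactly one matching 'framed' cell. A minimal plain-Python illustration of that behaviour (the sample lists are made up):

titles = ["a", "b", "c"]
descriptions = ["first", "second"]       # one description missing
print(list(zip(titles, descriptions)))
# [('a', 'first'), ('b', 'second')] -- "c" is silently dropped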
Answer 1 (score: 0)
I think you just need to move the item instantiation inside the for loop:
def parse(self, response):
    for sel in response.xpath("//td[@class='nblu tabcontent']"):
        item = DmozItem()
        item['title'] = sel.xpath("a/big/text()").extract()
        item['link'] = sel.xpath("a/@href").extract()
        for sel in response.xpath("//td[contains(@class,'framed')]"):
            item['description'] = sel.xpath("b/text()").extract()
        yield item