I'm trying to parse this XML (http://www.reddit.com/r/videos/top/.rss) and running into trouble. I want to save the YouTube link from each item, but I'm getting stuck because of the "channel" child node. How do I reach that level so I can iterate over the items?
#reddit parse
reddit_file = urllib2.urlopen('http://www.reddit.com/r/videos/top/.rss')
#convert to string:
reddit_data = reddit_file.read()
#close file because we don't need it anymore:
reddit_file.close()
#entire feed
reddit_root = etree.fromstring(reddit_data)
channel = reddit_root.findall('{http://purl.org/dc/elements/1.1/}channel')
print channel
reddit_feed = []
for entry in channel:
    #get description, url, and thumbnail
    desc = #not sure how to get this
    reddit_feed.append([desc])
Answer 0 (score: 5)
You can try findall('channel/item'):
import urllib2
from xml.etree import ElementTree as etree
#reddit parse
reddit_file = urllib2.urlopen('http://www.reddit.com/r/videos/top/.rss')
#convert to string:
reddit_data = reddit_file.read()
print reddit_data
#close file because we don't need it anymore:
reddit_file.close()
#entire feed
reddit_root = etree.fromstring(reddit_data)
item = reddit_root.findall('channel/item')
print item
reddit_feed = []
for entry in item:
    #get description, url, and thumbnail
    desc = entry.findtext('description')
    reddit_feed.append([desc])
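The answer above collects only the description, while the original goal was the YouTube link in each item. Here is a minimal, self-contained sketch of the same findall('channel/item') approach that pulls each item's link element. It parses an inline RSS sample rather than the live feed (the sample URLs and structure are illustrative assumptions, not the real feed content):

```python
# Sketch: extract the <link> of every <item> under <channel>.
# The inline sample below is an assumption standing in for the live feed.
from xml.etree import ElementTree as etree

sample = """<rss version="2.0">
  <channel>
    <title>videos</title>
    <item>
      <title>A video</title>
      <link>http://www.youtube.com/watch?v=abc123</link>
      <description>some text</description>
    </item>
    <item>
      <title>Another video</title>
      <link>http://www.youtube.com/watch?v=def456</link>
      <description>more text</description>
    </item>
  </channel>
</rss>"""

root = etree.fromstring(sample)
# 'channel/item' is a plain path because neither element is namespaced
links = [item.findtext('link') for item in root.findall('channel/item')]
print(links)
```

Note that 'channel' here needs no namespace prefix: in an RSS 2.0 document the channel and item elements live in no namespace, which is why the dc-namespaced findall in the question found nothing.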
Answer 1 (score: 3)
I wrote this for you using XPath expressions (tested successfully):
from lxml import etree
import urllib2
headers = { 'User-Agent' : 'Mozilla/5.0' }
req = urllib2.Request('http://www.reddit.com/r/videos/top/.rss', None, headers)
reddit_file = urllib2.urlopen(req).read()
reddit = etree.fromstring(reddit_file)
for item in reddit.xpath('/rss/channel/item'):
    print "title =", item.xpath("./title/text()")[0]
    print "description =", item.xpath("./description/text()")[0]
    print "thumbnail =", item.xpath("./*[local-name()='thumbnail']/@url")[0]
    print "link =", item.xpath("./link/text()")[0]
    print "-" * 100