How can I aggregate several feeds into one string and then parse them?

Date: 2012-11-01 03:06:49

Tags: python xml concatenation feed

I'm trying to aggregate a few YouTube feeds: download them, concatenate them into one string, and then parse the result. When I parse a single feed on its own I have no trouble, and the code seems to work fine. However, when I aggregate the feeds into one long string and then call etree.fromstring(aggregate_partner_feed), I get an error: ParseError: unbound prefix, with the etree line (marked in the code below) reported as the source. Any suggestions on how to fix this?

import urllib2
from xml.etree import ElementTree as etree

# YouTube usernames whose uploads should be aggregated
aggregated_partners_list = ['cnn', 'teamcoco', 'buzzfeed']


i = 1 
number_of_partners = len(aggregated_partners_list)
aggregate_partner_feed = '' 

for entry in aggregated_partners_list:
    #YOUTUBE FEED
    #download the feed:
    response = urllib2.urlopen('http://gdata.youtube.com/feeds/api/users/' + entry + '/uploads?v=2&max-results=50')
    #read it into a string:
    data = response.read()
    #close the connection because we don't need it anymore:
    response.close()

    if i == 1:
        #remove ending </feed>
        data = data[:-7]

    if i>1 and i != number_of_partners:
        data = data[data.find('<entry'):]
        data = data[:-7]
        #remove everything before first <entry> in the new feed and the last </entry>

    #if last, then only remove everything before first <entry>
    if i == number_of_partners:
        data = data[data.find('<entry'):]

    #append the current feed to the existing feed
    aggregate_partner_feed += data

    #increment the counter  
    i=i+1

print isinstance(data, basestring)                      #returns true
print isinstance(aggregate_partner_feed, basestring)    #returns true

#apply the parsing to the aggregated feed

#entire feed
root = etree.fromstring(aggregate_partner_feed)     #this is the line that gives the error
#all entries
entries = root.findall('{http://www.w3.org/2005/Atom}entry')
#more code that seems to work...
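For context: a common cause of "unbound prefix" when splicing XML together as text is that namespace prefixes (the GData feeds use prefixes such as media: and yt:) are declared as xmlns attributes on an ancestor element. An <entry> sliced out of its document as a string loses those declarations, so the parser can hit a prefixed tag it cannot resolve. A minimal illustration of that failure mode (the sample XML here is invented, not taken from the question):

```python
import xml.etree.ElementTree as etree

# An <entry> cut out of a feed as raw text loses the xmlns declarations
# that lived on the <feed> root, so prefixed tags inside it cannot be
# resolved and parsing fails.
orphan_entry = '<entry><media:title>clip</media:title></entry>'

try:
    etree.fromstring(orphan_entry)
except etree.ParseError as exc:
    print(exc)  # reports "unbound prefix" with the offending position
```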

1 answer:

Answer 0: (score: 0)

I parsed each feed individually and then used .append to combine the parsed entries, instead of concatenating the strings together first and then parsing.
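A minimal sketch of that approach, using the same Atom namespace the question's findall() already uses. The two inline sample feeds stand in for the downloaded YouTube responses (the v2 GData endpoint has since been retired, so live URLs are omitted): each feed is parsed on its own, so its namespace declarations are resolved at parse time, and the parsed <entry> elements are moved into one combined tree with .append().

```python
import xml.etree.ElementTree as etree

ATOM = '{http://www.w3.org/2005/Atom}'

# Stand-ins for the raw feed strings returned by urllib2 in the question.
feed_a = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>cnn video 1</title></entry>
  <entry><title>cnn video 2</title></entry>
</feed>"""

feed_b = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>teamcoco video 1</title></entry>
</feed>"""

# Parse the first feed; its root becomes the aggregate document.
root = etree.fromstring(feed_a)

# Parse every remaining feed separately and append its entries.
for raw in [feed_b]:
    other = etree.fromstring(raw)
    for entry in other.findall(ATOM + 'entry'):
        root.append(entry)

entries = root.findall(ATOM + 'entry')
print(len(entries))  # 3
```

Because each document is parsed whole, no entry is ever separated from its namespace declarations, which sidesteps the unbound-prefix problem entirely.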