Python - BeautifulSoup4 decompose() not working

Asked: 2014-07-26 14:19:39

Tags: python python-2.7 python-3.x beautifulsoup lxml

I'm trying to get the category for each of the titles on this page.

from bs4 import BeautifulSoup
import urllib2

headers = {
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) \
         AppleWebKit/537.36 (KHTML, like Gecko) \
         Ubuntu Chromium/33.0.1750.152 Chrome/33.0.1750.152 Safari/537.36'
}
category_url = ''
html = urllib2.urlopen(urllib2.Request(category_url, None, headers)).read()
page = BeautifulSoup(html)
results = page.find('div', {'class': "results"}).find_all('li')

for res in results:
    category = res.find(attrs={'class': "category"}) or res.find(attrs={'class': "categories"})
    #print category  #till here, I'm getting correct data
    print category.b.decompose() #here is the problem? I should get the div element without <b> tag but it returns None

I get None instead of the updated DOM.

PS: If you have any suggestions for improving this code, please let me know. I'm happy to make changes for better performance and more Pythonic code.

1 Answer:

Answer 0 (score: 0)

decompose() removes the tag from the tree and returns None, not the remaining tree. This is similar to how list.append and list.sort work: those methods also modify the caller in place and return None.

for res in results:
    category = res.find(attrs={'class': "category"}) or res.find(attrs={'class': "categories"})
    category.b.decompose()
    print(category)

produces output like:
<div class="categories">

<span class="highlighted">Advertising</span> <span class="highlighted">Agencies</span> </div>
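If you also want a handle on the element you removed, extract() behaves like decompose() but returns the tag it pulls out of the tree. A minimal sketch, reusing the loop from the question:

for res in results:
    category = res.find(attrs={'class': "category"}) or res.find(attrs={'class': "categories"})
    removed = category.b.extract()  # extract() returns the removed <b> tag
    print(removed)                  # the <b> element on its own
    print(category)                 # the div, now without the <b> tag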

Using lxml:

import lxml.html as LH
import urllib2

category_url = 'http://www.localsearch.ae/en/category/Advertising-Agencies/1013'
doc = LH.parse(urllib2.urlopen(category_url))
for category in doc.xpath(
        '//div[@class="category"]|//div[@class="categories"]'):
    b = category.find('b')  # the <b> child to drop
    category.remove(b)      # remove() mutates the tree in place
    print(LH.tostring(category))
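One caveat with this approach: Element.remove() in lxml discards the removed element's tail text along with it, so any text that followed </b> inside the div is lost. If that text matters, lxml.html elements provide drop_tree(), which removes the element and its children but preserves the tail text. A variant sketch, reusing the same doc as above:

for category in doc.xpath('//div[@class="category"]|//div[@class="categories"]'):
    b = category.find('b')
    if b is not None:
        b.drop_tree()  # removes <b> but keeps the text that followed it
    print(LH.tostring(category))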