I'm using BeautifulSoup to extract categories and subcategories from an HTML page. The HTML looks like this:
<a class='menuitem submenuheader' href='#'>Beverages</a><div class='submenu'><ul><li><a href='productlist.aspx?parentid=053&catid=055'>Juice</a></li></ul></div><a class='menuitem submenuheader' href='#'>DIY</a><div class='submenu'><ul><li><a href='productlist.aspx?parentid=007&catid=052'>Miscellaneous</a></li><li><a href='productlist.aspx?parentid=007&catid=047'>Sockets</a></li><li><a href='productlist.aspx?parentid=007&catid=046'>Spanners</a></li><li><a href='productlist.aspx?parentid=007&catid=045'>Tool Boxes</a></li></ul></div><a class='menuitem submenuheader' href='#'>Electronics</a><div class='submenu'><ul><li><a href='productlist.aspx?parentid=003&catid=019'>Audio/Video</a></li><li><a href='productlist.aspx?parentid=003&catid=027'>Cameras</a></li><li><a href='productlist.aspx?parentid=003&catid=023'>Cookers</a></li><li><a href='productlist.aspx?parentid=003&catid=024'>Freezers</a></li><li><a href='productlist.aspx?parentid=003&catid=025'>Kitchen Appliances</a></li><li><a href='productlist.aspx?parentid=003&catid=048'>Measuring Instruments</a></li><li><a href='productlist.aspx?parentid=003&catid=020'>Microwaves</a></li><li><a href='productlist.aspx?parentid=003&catid=050'>Miscellaneous</a></li><li><a href='productlist.aspx?parentid=003&catid=026'>Personal Care</a></li><li><a href='productlist.aspx?parentid=003&catid=021'>Refrigerators</a></li><li><a href='productlist.aspx?parentid=003&catid=018'>TV</a></li><li><a href='productlist.aspx?parentid=003&catid=022'>Washers/Dryers/Vacuum Cleaners</a></li></ul></div>
Beverages is a category and Juice is a subcategory.
I have the following code to extract the categories:
from bs4 import BeautifulSoup
import urllib2

url = "http://www.myprod.com"

def main():
    response = urllib2.urlopen(url)
    html = response.read()
    soup = BeautifulSoup(html, "html.parser")
    categories = soup.findAll("a", {"class": 'menuitem submenuheader'})
    for cat in categories:
        print cat.contents[0]
How can I get the subcategories in this format?
[Beverages = Category]
[Juice = Sub]
[DIY = Category]
[Miscellaneous = Sub]
[Spanners = Sub]
[Sockets = Sub]
[Electronics = Category]
[Audio/Video = Sub]
[Cameras = Sub]
Answer 0 (score: 0)
From each category's HTML you have to find the next element and, from there, find its li elements:
print cat.findNext().findAll('li')
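Putting that together, here is a minimal sketch (assuming bs4/BeautifulSoup 4 under Python 3, with a trimmed-down copy of the question's menu HTML) that prints the format the question asks for:

```python
from bs4 import BeautifulSoup

# Trimmed sample of the menu HTML from the question.
html = ("<a class='menuitem submenuheader' href='#'>Beverages</a>"
        "<div class='submenu'><ul>"
        "<li><a href='#'>Juice</a></li>"
        "</ul></div>"
        "<a class='menuitem submenuheader' href='#'>DIY</a>"
        "<div class='submenu'><ul>"
        "<li><a href='#'>Sockets</a></li>"
        "<li><a href='#'>Spanners</a></li>"
        "</ul></div>")

soup = BeautifulSoup(html, "html.parser")
lines = []
for cat in soup.find_all("a", {"class": "menuitem submenuheader"}):
    lines.append("[%s = Category]" % cat.text)
    # find_next() jumps forward to the submenu <div> that follows this header.
    for li in cat.find_next("div", {"class": "submenu"}).find_all("li"):
        lines.append("[%s = Sub]" % li.text)
print("\n".join(lines))
```

Each category header is followed in document order by its own submenu div, which is why `find_next` pairs them correctly.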
Answer 1 (score: 0)
Given that your HTML always has those submenu divs, the best approach is to return one list for the categories and another for the subcategories, such that cats[i] corresponds to subcats[i] — or, depending on your needs, a dictionary.
In the Python shell:
>>> from bs4 import BeautifulSoup
>>> html = '''<a class="menuitem submenuheader" href="#">Beverages</a>
... <div class="submenu">
... <ul>
... <li><a href="productlist.aspx?parentid=053&catid=055">Juice</a></li>
... <li><a href="productlist.aspx?parentid=053&catid=055">Milk</a></li>
... </ul>
... </div>
... <a class="menuitem submenuheader" href="#">DIY</a>
... <div class="submenu">
... <ul>
... <li><a href="productlist.aspx?parentid=053&catid=055">Miscellaneous</a></li>
... <li><a href="productlist.aspx?parentid=053&catid=055">Spanners</a></li>
... <li><a href="productlist.aspx?parentid=053&catid=055">Sockets</a></li>
... </ul>
... </div>'''
>>> soup = BeautifulSoup(html, "html.parser")
>>> categories = soup.findAll("a", {"class": 'menuitem submenuheader'})
>>> cats = [cat.text for cat in categories]
>>> sub_menus = soup.findAll("div", {"class": "submenu"})
>>> subcats = []
>>> for menu in sub_menus:
...     subcat = [item.text for item in menu.findAll('li')]
...     subcats.append(subcat)
...
>>> print cats
[u'Beverages', u'DIY']
>>> print subcats
[[u'Juice', u'Milk'], [u'Miscellaneous', u'Spanners', u'Sockets']]
>>> cat_dict = dict(zip(cats, subcats))
>>> print cat_dict
{u'Beverages': [u'Juice', u'Milk'], u'DIY': [u'Miscellaneous', u'Spanners', u'Sockets']}
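The same pairing idea can be written as a standalone Python 3 script (a sketch assuming bs4; the sample HTML is trimmed from the question):

```python
from bs4 import BeautifulSoup

# Trimmed sample of the menu HTML from the question.
html = ("<a class='menuitem submenuheader' href='#'>Beverages</a>"
        "<div class='submenu'><ul>"
        "<li><a href='#'>Juice</a></li>"
        "<li><a href='#'>Milk</a></li>"
        "</ul></div>")

soup = BeautifulSoup(html, "html.parser")
cats = [a.text for a in soup.find_all("a", {"class": "menuitem submenuheader"})]
# find_all returns elements in document order, so the i-th submenu div
# belongs to the i-th category header.
subcats = [[li.text for li in div.find_all("li")]
           for div in soup.find_all("div", {"class": "submenu"})]
cat_dict = dict(zip(cats, subcats))
print(cat_dict)
```

This relies on every header being followed by exactly one submenu div; if a category could be missing its submenu, the two lists would fall out of step and the per-header `find_next` approach would be safer.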
Answer 2 (score: 0)
Looking at the page in question, it appears that all the news stories are in h3 tags with the class item-heading. You can use BeautifulSoup to select all the story headings and then step up to the a href that contains each of them:
In [54]: [i.parent.attrs["href"] for i in soup.select('a > h3.item-heading')]
Out[54]:
['/news/us-news/civil-rights-groups-fight-trump-s-refugee-ban-uncertainty-continues-n713811',
 '/news/us-news/protests-erupt-nationwide-second-day-over-trump-s-travel-ban-n713771',
 '/politics/politics-news/some-republicans-criticize-trump-s-immigration-order-n713826',
 ... # trimmed for readability
]
I used a list comprehension here, but you could break it into separate steps:
# select all `h3` tags with the matching class that are contained within an `a` link.
# This excludes any random links elsewhere on the page.
story_headers = soup.select('a > h3.item-heading')
# Iterate through all the matching `h3` items and access their parent `a` tag.
# Then, within the parent you have access to the `href` attribute.
list_of_links = [i.parent.attrs for i in story_headers]
# Finally, extract the links into a tidy list
links = [i["href"] for i in list_of_links]
Once you have the list of links, you can iterate over it and check whether the first character is / so that you match only local links, not external ones.
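For example (a sketch; the contents of the links list are hypothetical stand-ins for the hrefs scraped above):

```python
# Hypothetical list of hrefs scraped as above.
links = [
    "/news/us-news/some-local-story",
    "https://example.com/some-external-story",
    "/politics/politics-news/another-local-story",
]

# Keep only local links: those whose first character is "/".
local_links = [link for link in links if link.startswith("/")]
print(local_links)
```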