BeautifulSoup - parsing a table-like site structure | returning a dictionary

Date: 2019-11-09 01:17:47

Tags: python beautifulsoup

I have some HTML that looks like a dictionary:

Manufacturer website: website,

Headquarters: location, etc.

Each section is wrapped in its own div (hence the findAll on the div class name).

Is there an elegant, simple way to extract this into a dictionary? Or do I have to iterate over each div, find the two text items, and assume the first text item is the dictionary key and the second is the value of the same dict entry?

Sample site code:

    car = '''
     <div class="info flexbox">
       <div class="infoEntity">
        <span class="manufacturer website">
         <a class="link" href="http://www.ford.com" rel="nofollow noreferrer" target="_blank">
          www.ford.com
         </a>
        </span>
       </div>
       <div class="infoEntity">
        <label>
         Headquarters
        </label>
        <span class="value">
         Dearbord, MI
        </span>
       </div>
       <div class="infoEntity">
        <label>
         Model
        </label>
        <span class="value">
         Mustang
        </span>
       </div>
    '''

    from bs4 import BeautifulSoup

    car_soup = BeautifulSoup(car, 'lxml')
    print(car_soup.prettify())

    elements = car_soup.findAll('div', class_='infoEntity')
    for x in elements:
        print(x)  # and then we iterate over x with BeautifulSoup to find each element's value
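The per-div iteration described above can be sketched as follows. This is a minimal, self-contained version of the naive approach (a trimmed copy of the sample HTML is embedded so it runs on its own, and `html.parser` is used so lxml isn't required). Note that the website div has no `<label>`, so the "first string is the key" assumption breaks there and it needs a special case:

```python
from bs4 import BeautifulSoup

car = '''
<div class="info flexbox">
  <div class="infoEntity">
    <span class="manufacturer website">
      <a class="link" href="http://www.ford.com">www.ford.com</a>
    </span>
  </div>
  <div class="infoEntity">
    <label>Headquarters</label>
    <span class="value">Dearbord, MI</span>
  </div>
  <div class="infoEntity">
    <label>Model</label>
    <span class="value">Mustang</span>
  </div>
</div>
'''

result = {}
for div in BeautifulSoup(car, 'html.parser').find_all('div', class_='infoEntity'):
    texts = list(div.stripped_strings)
    if len(texts) == 2:
        # a <label>/<span class="value"> pair: first string is the key
        result[texts[0]] = texts[1]
    else:
        # the website div yields only one string; fall back to the span's classes
        span = div.find('span')
        result[' '.join(span['class'])] = texts[0]

print(result)
# {'manufacturer website': 'www.ford.com', 'Headquarters': 'Dearbord, MI', 'Model': 'Mustang'}
```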

The desired output is this:

    result = {'manufacturer website': "ford.com", 'Headquarters': 'Dearborn, Mi', 'Model': 'Mustang'}

P.S. At this point I've already made a few inelegant attempts; I just want to know whether I'm missing something and whether there's a better way to do this. Thanks in advance!

2 Answers:

Answer 0 (score: 2)

The current HTML structure is quite generic: it contains multiple infoEntity divs whose child content can be formatted in several ways. To handle this, you can iterate over the infoEntity divs and dispatch on the child tag type, like so:

    from bs4 import BeautifulSoup as soup

    result, label = {}, None
    for i in soup(car, 'html.parser').find_all('div', {'class': 'infoEntity'}):
        for b in i.find_all(['span', 'label']):
            if b.name == 'label':
                label = b.get_text(strip=True)
            elif b.name == 'span' and label is not None:
                result[label] = b.get_text(strip=True)
                label = None
            else:
                result[' '.join(b['class'])] = b.get_text(strip=True)

Output:

    {'manufacturer website': 'www.ford.com', 'Headquarters': 'Dearbord, MI', 'Model': 'Mustang'}

Answer 1 (score: 2)

Alternatively, to keep things more or less generic and simple, you can handle the labeled fields and the manufacturer website link separately:

    from bs4 import BeautifulSoup

    soup = BeautifulSoup(car, 'lxml')

    car_info = soup.select_one('.info')
    data = {
        label.get_text(strip=True): label.find_next_sibling().get_text(strip=True)
        for label in car_info.select('.infoEntity label')
    }
    data['manufacturer website'] = car_info.select_one('.infoEntity a').get_text(strip=True)

    print(data)

Prints:

    {'Headquarters': 'Dearbord, MI',
     'Model': 'Mustang',
     'manufacturer website': 'www.ford.com'}