Getting at most 10 items because of the "See the full list" button

Asked: 2017-03-20 13:51:51

Tags: python beautifulsoup html-parsing

Link: http://fortune.com/worlds-most-admired-companies/2016/

So, I want every 'href' inside a div with a known class name, but I can't get past this:

import bs4 as bs
import urllib.request

# Download the page and parse it with lxml
raw = urllib.request.urlopen('http://fortune.com/worlds-most-admired-companies/2016/')
soup = bs.BeautifulSoup(raw, 'lxml')

# Find the div that should contain the company list (the keyword argument is class_)
listdiv = soup.find('div', class_="company-franchise-result-content current")

# Print every href inside that div
for url in listdiv.find_all('a'):
    print(url.get('href'))
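A quick way to see why the loop above fails is to check whether the div is actually in the downloaded HTML before iterating over it (a minimal check, assuming the same URL and class name as above):

import bs4 as bs
import urllib.request

raw = urllib.request.urlopen('http://fortune.com/worlds-most-admired-companies/2016/')
soup = bs.BeautifulSoup(raw, 'lxml')

# If the full list is rendered client-side, this div may not be present in the raw HTML,
# in which case find() returns None and calling find_all() on it raises AttributeError.
listdiv = soup.find('div', class_="company-franchise-result-content current")
print(listdiv is not None)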

I previously used:

for a in soup.find_all('a'):
    print(a.get('href'))

That works, but it only returns 10 items, from Apple to General Electric, and those are the same links I get even after clicking the "See the full list" button. I don't know how JSON works, but it looks like that is what's involved here.

1 answer:

Answer 0 (score: 3)

The complete data is actually present in the HTML. It sits in a JavaScript object inside a script tag. You can find this script tag, get its text, extract the JSON string, load it into a Python data structure with json.loads(), and pull out the data you need:

In [1]: from bs4 import BeautifulSoup

In [2]: import json

In [3]: import re

In [4]: import requests

In [5]: url = "http://fortune.com/worlds-most-admired-companies/2016/"

In [6]: response = requests.get(url)

In [7]: soup = BeautifulSoup(response.content, "lxml")

In [8]: pattern = re.compile(r"var fortune_wp_vars = ({.*?});", re.DOTALL | re.MULTILINE)

In [9]: script = soup.find("script", text=pattern)

In [10]: data = json.loads(pattern.search(script.get_text()).group(1))

In [11]: companies = data["bootstrap"]["franchise"]["filtered_sorted_data"]

In [12]: for company in companies:
    ...:     print(company["title"])
    ...:
Apple
Alphabet
...
Yum Brands
ZF Friedrichshafen
Zurich Insurance Group
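
For reference, here are the same steps rolled into one standalone script (a sketch that assumes the page still serves the fortune_wp_vars object with the same JSON layout, and that requests and lxml are installed):

import json
import re

import requests
from bs4 import BeautifulSoup

URL = "http://fortune.com/worlds-most-admired-companies/2016/"

# The full list is embedded in a <script> tag as "var fortune_wp_vars = {...};"
pattern = re.compile(r"var fortune_wp_vars = ({.*?});", re.DOTALL | re.MULTILINE)

response = requests.get(URL)
soup = BeautifulSoup(response.content, "lxml")

# Find the script tag whose text matches the pattern and extract the JSON object from it
script = soup.find("script", text=pattern)
data = json.loads(pattern.search(script.get_text()).group(1))

# Walk down to the company list and print every title
companies = data["bootstrap"]["franchise"]["filtered_sorted_data"]
for company in companies:
    print(company["title"])

Printing companies[0] will show the other keys each entry carries; that is where the href you were after should live, if the page exposes it.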