BeautifulSoup does not get all the HTML

Time: 2018-07-16 14:28:47

Tags: python python-3.x web-scraping beautifulsoup

I am new to web scraping and Python, and I have written the code below for scraping.

This is the link. With the code given below, the response does not contain all of the HTML: the data in the middle of the page is not fetched. I have tried both lxml and html.parser, but it makes no difference.

from bs4 import BeautifulSoup
import requests

url = 'http://www.hl.co.uk/funds/fund-discounts,-prices--and--factsheets/search-results/a'
response = requests.get(url)

soup = BeautifulSoup(response.content, 'lxml')
print(soup)

I do not know the reason; perhaps I am missing some key point.
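When a page seems to be missing data, a useful first check is whether that data appears in the raw response at all: if it does not, the page is most likely filling it in with JavaScript, and a plain `requests.get` will never see it. A minimal sketch of that check (the helper name and the `'Aberdeen'` marker string are illustrative, not part of the original code):

```python
def data_in_raw_html(html: str, marker: str) -> bool:
    """True if the marker text appears in the raw HTML, meaning the
    server delivered the data and it is not injected by JavaScript."""
    return marker in html

# Against the live page you would call this with response.text, e.g.:
#   data_in_raw_html(response.text, 'Aberdeen')
# Offline demonstration with toy strings:
print(data_in_raw_html('<a title="Aberdeen Asia Pacific">', 'Aberdeen'))  # True
print(data_in_raw_html('<div id="app"></div>', 'Aberdeen'))               # False
```

In this case the fund list is present in the server-side HTML, so the data can be extracted directly, as the answer below shows.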

1 Answer:

Answer 0: (score: 0)

from bs4 import BeautifulSoup
import requests

url = 'http://www.hl.co.uk/funds/fund-discounts,-prices--and--factsheets/search-results/a'
response = requests.get(url)

soup = BeautifulSoup(response.content, 'html.parser')

# Each fund name is stored in the title attribute of an anchor
# that is a direct child of a list item in the fund list.
for fund in soup.select("ul[class='list-unstyled list-indent'] > li > a"):
    print(fund.attrs['title'])

The result will be:

Aberdeen Asia Pacific and Japan Equity (Class I) Accumulation
Aberdeen Asia Pacific and Japan Equity Accumulation Inclusive
Aberdeen Asia Pacific Equity (Class I) Accumulation
Aberdeen Asia Pacific Equity (Class I) Income
.
.
.
AXA WF Framlington Robotech (Class F) Accumulation
AXA WF Framlington Robotech (Class F) Income
AXA WF Framlington UK (Class L) Accumulation
AXA WF Global Strategic Bonds (Class I H) Accumulation
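The key is the CSS selector: `ul[class='list-unstyled list-indent'] > li > a` matches every anchor that is a direct child of a list item inside the fund list, and each anchor carries the fund name in its `title` attribute. A self-contained sketch of the same pattern (the HTML snippet and fund names here are illustrative, not the live page):

```python
from bs4 import BeautifulSoup

# Toy markup mimicking the fund-list structure on the page.
html = """
<ul class="list-unstyled list-indent">
  <li><a title="Fund A Accumulation" href="/funds/a">Fund A</a></li>
  <li><a title="Fund B Income" href="/funds/b">Fund B</a></li>
</ul>
"""

soup = BeautifulSoup(html, 'html.parser')
titles = [a.attrs['title']
          for a in soup.select("ul[class='list-unstyled list-indent'] > li > a")]
print(titles)  # ['Fund A Accumulation', 'Fund B Income']
```

The same anchors also expose `href`, so `a.attrs['href']` would give each fund's link if you need to follow it.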