Problems web scraping a dynamic HTML website with Beautiful Soup

Asked: 2018-07-19 04:09:08

Tags: python beautifulsoup

I am trying to scrape a series of HTML files with Beautiful Soup, but I am getting some very strange results. I think this is because the page is populated by dynamic queries, and I am not very experienced with web scraping. If you look at the website, all I am trying to do in this case is get all the information for each work type, but my results are far from what I expected. Please see my code below (and thanks in advance):

import requests
from bs4 import BeautifulSoup

url = 'https://www.acc.co.nz/for-providers/treatment-recovery/work-type-detail-sheets/#/viewSheet/1416'
r = requests.get(url)
html_doc = r.text
# Pass an explicit parser to avoid BeautifulSoup's "no parser specified" warning
soup = BeautifulSoup(html_doc, 'html.parser')
pretty_soup = soup.prettify()
print(pretty_soup)
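
A quick way to confirm that the content is loaded dynamically is to check whether a known job title appears anywhere in the HTML that requests actually receives. A minimal sketch, using "Welder" (one of the job titles shown in the answer below) as the probe string:

import requests

url = 'https://www.acc.co.nz/for-providers/treatment-recovery/work-type-detail-sheets/#/viewSheet/1416'
html_doc = requests.get(url).text

# The job title is visible in the rendered page, but if it is absent from
# the raw HTML, the data must be injected by JavaScript after page load.
print('Welder' in html_doc)  # expected: False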

Thanks for all the help. I thought I would share my finished code below. Note that it borrows heavily from another post, Strip HTML from strings in Python, and it would not have been possible without @Andrej Kesely.

  url = "https://www.acc.co.nz/for-providers/treatment-recovery/work-type-detail-sheets/getSheets"

import requests
import json
from pandas.io.json import json_normalize

headers = {'X-Requested-With': 'XMLHttpRequest'}
r = requests.get(url, headers=headers)
data = json.loads(r.text)
result = json_normalize(data)

result = result[['ANZSCO','Comments','Description','Group',
             'EntryRequirements','JobTitle','PhysicalMentalDemands',
             'WorkEnvironment','WorkTasks']]


## Let's start cleaning up the data set

from html.parser import HTMLParser

class MLStripper(HTMLParser):
    def __init__(self):
        super().__init__()  # required in Python 3 so the parser is initialised
        self.reset()
        self.strict = False
        self.convert_charrefs = True
        self.fed = []

    def handle_data(self, d):
        self.fed.append(d)

    def get_data(self):
        return ''.join(self.fed)


def strip_tags(html):
    s = MLStripper()
    s.feed(html)
    return s.get_data()


# Use a descriptive name instead of shadowing the built-in 'list'
html_cols = ['WorkTasks', 'PhysicalMentalDemands', 'Description']

for col in html_cols:
    result[col] = result[col].apply(strip_tags)

nullable_cols = ['Comments', 'EntryRequirements', 'WorkEnvironment']

for col in nullable_cols:
    result[col] = result[col].fillna('not_available')
    result[col] = result[col].apply(strip_tags)
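
Since Beautiful Soup is already a dependency of this project, its get_text() method offers a simpler alternative to the HTMLParser subclass above. A minimal sketch of a drop-in replacement for strip_tags (an alternative approach, not part of the original solution):

from bs4 import BeautifulSoup

def strip_tags_bs(html):
    # Parse the fragment and return only its text nodes, dropping the markup
    return BeautifulSoup(html, 'html.parser').get_text()

# Usage is identical to strip_tags, e.g.:
# result['Description'] = result['Description'].apply(strip_tags_bs)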

1 Answer:

Answer 0 (score: 2):

The page is loaded dynamically via Ajax. Looking at the network inspector, the page loads all of its data from a very large JSON file located at https://www.acc.co.nz/for-providers/treatment-recovery/work-type-detail-sheets/getSheets. To load all the job data, you can use the following script:

url = "https://www.acc.co.nz/for-providers/treatment-recovery/work-type-detail-sheets/getSheets"

import requests
import json

headers = {'X-Requested-With': 'XMLHttpRequest'}
r = requests.get(url, headers=headers)
data = json.loads(r.text)

# For printing all data in pretty form uncoment this line:
# print(json.dumps(data, indent=4, sort_keys=True))

for d in data:
    print(f'ID:\t{d["ID"]}')
    print(f'Job Title:\t{d["JobTitle"]}')
    print(f'Created:\t{d["Created"]}')
    print('*' * 80)

# Available keys in this JSON:
# ClassName
# LastEdited
# Created
# ANZSCO
# JobTitle
# Description
# WorkTasks
# WorkEnvironment
# PhysicalMentalDemands
# Comments
# EntryRequirements
# Group
# ID
# RecordClassName

This prints:

ID: 2327
Job Title:  Watch and Clock Maker and Repairer   
Created:    2017-07-11 11:33:52
********************************************************************************
ID: 2328
Job Title:  Web Administrator
Created:    2017-07-11 11:33:52
********************************************************************************
ID: 2329
Job Title:  Welder 
Created:    2017-07-11 11:33:52

...and so on

In the script I have listed the available keys, which you can use to access the data for a specific job.
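
For example, a minimal sketch of looking up a single sheet by its JobTitle (the titles are stripped because, as the output above shows, some contain trailing whitespace):

import requests

url = 'https://www.acc.co.nz/for-providers/treatment-recovery/work-type-detail-sheets/getSheets'
headers = {'X-Requested-With': 'XMLHttpRequest'}
data = requests.get(url, headers=headers).json()

# Index the records by job title for quick lookups; strip() handles the
# trailing whitespace present in some titles (e.g. 'Welder ')
by_title = {d['JobTitle'].strip(): d for d in data}

welder = by_title['Welder']
print(welder['ANZSCO'])
print(welder['Description'])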