I'm new to Python and started using it just today. My environment is Python 3.5 on Windows 10, with a few libraries installed.
I want to extract the football player data from the following website into a CSV file.
Problem: I can't get the data out of soup.find_all('script')[17] and into the CSV format I expect. How do I extract this data the way I need?
My code is shown below.
from bs4 import BeautifulSoup
import re
from urllib.request import Request, urlopen
req = Request('http://www.futhead.com/squad-building-challenges/squads/343', headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req).read()
soup = BeautifulSoup(webpage,'html.parser') #not sure if i need to use lxml
soup.find_all('script')[17] # My target data is in script tag index 17
My expected output looks something like this:
position,slot_position,slug
ST,ST,paulo-henrique
LM,LM,mugdat-celik
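For reference, a minimal sketch that just prints the raw contents of that script tag, to see the JavaScript call wrapping the JSON (same request setup as in the code above; index 17 is assumed from the question):
# Minimal sketch: print the raw text of the target <script> tag.
# Assumes the same page and that the data still sits in script tag index 17, as above.
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen

req = Request('http://www.futhead.com/squad-building-challenges/squads/343',
              headers={'User-Agent': 'Mozilla/5.0'})
soup = BeautifulSoup(urlopen(req).read(), 'html.parser')
# First 300 characters are enough to see the squad.register_players($.parseJSON('...')) wrapper
print(soup.find_all('script')[17].text[:300])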
Answer 0 (score: 0)
So my understanding is that BeautifulSoup is better suited to HTML parsing, but what you are trying to parse here is JavaScript nested inside the HTML.
So you have two options.
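As a minimal sketch of one possible route (not necessarily one of the options this answer had in mind): pull the embedded JSON out of the JavaScript with a regular expression and parse it with json.loads. The URL and the squad.register_players($.parseJSON('...')) wrapper come from the question and the answer below; searching every script tag instead of hard-coding index 17 is just a choice made for the sketch, and it assumes the JSON sits on one line with no escaped single quotes.
# Sketch: regex-extract the JSON embedded in the page's JavaScript, then parse it.
import json
import re
from urllib.request import Request, urlopen
from bs4 import BeautifulSoup

req = Request('http://www.futhead.com/squad-building-challenges/squads/343',
              headers={'User-Agent': 'Mozilla/5.0'})
soup = BeautifulSoup(urlopen(req).read(), 'html.parser')

# Look for the JS call that wraps the player JSON, in any script tag.
pattern = re.compile(r"squad\.register_players\(\$\.parseJSON\('(.+?)'\)\)")
for tag in soup.find_all('script'):
    match = pattern.search(tag.text)
    if match:
        players = json.loads(match.group(1))
        print(len(players), 'player entries found')
        break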
Answer 1 (score: 0)
As @josiah Swain said, it's not going to be pretty. For this sort of thing JS is the recommended tool, since it natively understands what you're dealing with.
That said, Python is awesome, so here is your solution!
EDIT: On reflection, my first version wasn't the easiest code for a beginner to read. Here is a more readable version:
#Same imports as before
from bs4 import BeautifulSoup
import re
from urllib.request import Request, urlopen
#And one more
import json
# The code you had
req = Request('http://www.futhead.com/squad-building-challenges/squads/343',
              headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req).read()
soup = BeautifulSoup(webpage,'html.parser')
# Store the script
script = soup.find_all('script')[17]
# Extract the one line that stores all that JSON
uncleanJson = [line for line in script.text.split('\n')
               if line.lstrip().startswith('squad.register_players($.parseJSON')][0]
# The easiest way to strip away all that yucky JS to get to the JSON
cleanJSON = uncleanJson.lstrip() \
    .replace('squad.register_players($.parseJSON(\'', '') \
    .replace('\'));', '')
# Extract out that useful info
data = [[p['position'], p['data']['slot_position'], p['data']['slug']]
        for p in json.loads(cleanJSON)
        if p['player'] is not None]
print('position,slot_position,slug')
for line in data:
    print(','.join(line))
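Since the question asks for an actual CSV file rather than printed output, a small follow-up sketch (assuming data is the list of rows built above; 'players.csv' is just an example filename) writes the same rows with the csv module:
# Sketch: write the extracted rows to a CSV file instead of printing them.
# Assumes `data` holds the [position, slot_position, slug] rows built above.
import csv

with open('players.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['position', 'slot_position', 'slug'])  # header row
    writer.writerows(data)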