Scraping a difficult table

Date: 2019-03-16 14:55:49

Tags: python web-scraping beautifulsoup

For quite a while I have been trying to scrape a table from here, without success. The table I want to scrape is titled "Team Per Game Stats". I am confident that once I can scrape one element of that table, I can iterate through the columns I want and eventually end up with a pandas DataFrame.

Here is my code so far:

from bs4 import BeautifulSoup
import requests

# url that we are scraping
r = requests.get('https://www.basketball-reference.com/leagues/NBA_2019.html')
# Let's look at what the request content looks like
print(r.content)

# use Beautifulsoup on content from request
c = r.content
soup = BeautifulSoup(c, 'html.parser')
print(soup)

# using prettify() in BeautifulSoup indents the HTML as it appears in the web page
# This can make reading the HTML a little easier
print(soup.prettify())

# get the wrapper div for the 'Team Per Game Stats' table
team_per_game = soup.find(id="all_team-stats-per_game")
print(team_per_game)

Any help would be greatly appreciated.

2 answers:

Answer 0 (score: 6)

The web page uses a trick intended to stop search engines and other automated web clients, including scrapers, from finding the table data: the table is stored inside an HTML comment:

<div id="all_team-stats-per_game" class="table_wrapper setup_commented commented">

<div class="section_heading">
  <span class="section_anchor" id="team-stats-per_game_link" data-label="Team Per Game Stats"></span><h2>Team Per Game Stats</h2>    <div class="section_heading_text">
      <ul> <li><small>* Playoff teams</small></li>
      </ul>
    </div>      
</div>
<div class="placeholder"></div>
<!--
   <div class="table_outer_container">
      <div class="overthrow table_container" id="div_team-stats-per_game">
  <table class="sortable stats_table" id="team-stats-per_game" data-cols-to-freeze=2><caption>Team Per Game Stats Table</caption>

...

</table>

      </div>
   </div>
-->
</div>

Note that the opening div has the setup_commented and commented classes. JavaScript code included in the page is executed by the browser, loads the text from those comments, and replaces the placeholder div with the contents as new HTML for the browser to display.

You can extract the comment text like this:

from bs4 import BeautifulSoup, Comment
import requests

r = requests.get('https://www.basketball-reference.com/leagues/NBA_2019.html')
soup = BeautifulSoup(r.content, 'lxml')
# the commented-out table sits right after the placeholder div
placeholder = soup.select_one('#all_team-stats-per_game .placeholder')
comment = next(elem for elem in placeholder.next_siblings if isinstance(elem, Comment))
table_soup = BeautifulSoup(comment, 'lxml')

From there you can continue parsing the table HTML.
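If the end goal is the pandas DataFrame mentioned in the question, one option (a minimal sketch, assuming pandas is installed along with an HTML parser backend such as lxml) is to hand the comment text straight to pandas.read_html, which returns a list of DataFrames:

import pandas as pd

# read_html parses every <table> in the string; the comment holds just one,
# so the first (and only) entry in the returned list is the stats table
df = pd.read_html(str(comment))[0]
print(df.head())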

This particular site publishes both terms of use and a page on data use, which you should probably read if you are going to use their data. Specifically, under section 6. Site Content, their terms state:

You may not frame, capture, harvest, or collect any part of the Site or Content without SRL's advance written consent.

Scraping the data would fall under that heading.

Answer 1 (score: 1)

Just to complete Martijn Pieters' answer (and without lxml):

from bs4 import BeautifulSoup, Comment
import requests

r = requests.get('https://www.basketball-reference.com/leagues/NBA_2019.html')
soup = BeautifulSoup(r.content, 'html.parser')
# find the comment that follows the placeholder div, as in the answer above
placeholder = soup.select_one('#all_team-stats-per_game .placeholder')
comment = next(elem for elem in placeholder.next_siblings if isinstance(elem, Comment))
table = BeautifulSoup(comment, 'html.parser')
rows = table.find_all('tr')
for row in rows:
    cells = row.find_all('td')
    if cells:  # header cells are <th>, so this skips the header row
        print([cell.text for cell in cells])

Partial output:

[u'New Orleans Pelicans', u'71', u'240.0', u'43.6', u'91.7', u'.476', u'10.1', u'29.4', u'.344', u'33.5', u'62.4', u'.537', u'18.1', u'23.9', u'.760', u'11.0', u'36.0', u'47.0', u'27.0', u'7.5', u'5.5', u'14.5', u'21.4', u'115.5']
[u'Milwaukee Bucks*', u'69', u'241.1', u'43.3', u'90.8', u'.477', u'13.3', u'37.9', u'.351', u'30.0', u'52.9', u'.567', u'17.6', u'22.8', u'.773', u'9.3', u'40.1', u'49.4', u'26.0', u'7.4', u'6.0', u'14.0', u'19.8', u'117.6']
[u'Los Angeles Clippers', u'70', u'241.8', u'41.0', u'87.6', u'.469', u'9.8', u'25.2', u'.387', u'31.3', u'62.3', u'.502', u'22.8', u'28.8', u'.792', u'9.9', u'35.7', u'45.6', u'23.4', u'6.6', u'4.7', u'14.5', u'23.5', u'114.6']
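Note that the if cells: test also filters out the header row, because its cells are th rather than td elements. A possible extension (a sketch, not part of the original answer; it assumes, as the output above suggests, that the rank column is rendered as a th in each data row on this page) to collect matching column names:

# the header row holds <th> cells, which find_all('td') skips
header = [th.get_text() for th in table.find('tr').find_all('th')]
# drop the leading rank header, which has no matching <td> in the data rows
column_names = header[1:]
print(column_names)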