I'm trying to scrape some schedules from ESPN: http://www.espn.com/nba/schedule/_/date/20171001
import requests
import bs4
response = requests.get('http://www.espn.com/nba/schedule/_/date/20171001')
soup = bs4.BeautifulSoup(response.text, 'lxml')
print(soup.prettify())
table = soup.find_all('table')
data = []
for i in table:
    rows = i.find_all('tr')
    for row in rows:
        cols = row.find_all('td')
        cols = [col.text.strip() for col in cols]
        data.append([col for col in cols if col])
My code works, but the output is missing the date information:
[
    "Phoenix PHX",
    "Utah UTAH",
    "394 tickets available from $6"
],
[],
[
    "Miami MIA",
    "Orlando ORL",
    "1,582 tickets available from $12"
]
After some digging, I found that the date and time information is contained in a td tag that looks like this:
<td data-behavior="date_time" data-date="2017-10-07T23:00Z"><a data-dateformat="time1" href="/nba/game?gameId=400978807" name="&lpos=nba:schedule:time"></a></td>
I also see this from time to time on other sites, but I never really understood why it is done this way. How do I pull the value out of the opening tag so that "2017-10-07T23:00Z" shows up in my output?
Answer 0 (score: 1)
The attrs attribute holds a dictionary of the tag's attributes, which you can use to get the value. I added a length check because there are some empty rows.
You can try modifying your for loop as follows:
for i in table:
    rows = i.find_all('tr')
    for row in rows:
        cols = row.find_all('td')
        date_data = None
        if len(cols) > 2:
            date_data = cols[2].attrs['data-date']
        cols = [col.text.strip() for col in cols]
        dat = [col for col in cols if col]
        if date_data:
            dat.append(date_data)
        data.append(dat)
PS: the snippet above could be optimized :-)
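For instance, here is a minimal sketch of one way it could be tightened, using Tag.get() so that cells without a data-date attribute simply fall back to their text (an illustrative variant, not the answerer's code):
# Illustrative sketch: fall back from the data-date attribute to the cell text in one pass.
for i in table:
    for row in i.find_all('tr'):
        cells = [td.get('data-date') or td.text.strip() for td in row.find_all('td')]
        data.append([c for c in cells if c])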
Answer 1 (score: 1)
Some of the td tags in that table carry attributes. You can access a td's attributes via attrs, which returns a dict:
>>> td = soup.select('tr')[1].select('td')[2]
>>> td
<td data-behavior="date_time" data-date="2017-10-01T22:00Z"><a data-dateformat="time1" href="/nba/game?gameId=400978817" name="&lpos=nba:schedule:time"></a></td>
>>> td.attrs
{'data-date': '2017-10-01T22:00Z', 'data-behavior': 'date_time'}
>>> td.attrs['data-date']
'2017-10-01T22:00Z'
With that, you can write a function that returns the date when that attribute is present and the td's text otherwise:
import requests
import bs4

def date_or_text(td):
    if 'data-date' in td.attrs:
        return td.attrs['data-date']
    return td.text

def extract_game_information(tr):
    tds_with_blanks = (date_or_text(td) for td in tr.select('td'))
    return [data for data in tds_with_blanks if data]

response = requests.get('http://www.espn.com/nba/schedule/_/date/20171001')
soup = bs4.BeautifulSoup(response.text, 'lxml')
data = [extract_game_information(tr) for tr in soup.select('tr')]
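As a small usage note (assuming the request above succeeded), header and spacer rows come back as empty lists, so you can filter them out before printing:
# Keep only rows that actually contain game data and show them.
games = [row for row in data if row]
for game in games:
    print(game)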