Python web scraping with BeautifulSoup does not show all rows

Time: 2019-05-08 07:24:12

Tags: python web-scraping beautifulsoup xml-parsing html-parsing

I'm trying to scrape the squad overview pages for several clubs from Transfermarkt and noticed that rows are missing on some pages.

Here are two example pages:

Works: all rows are included here

Does not work: rows are missing here

from bs4 import BeautifulSoup as bs
import requests
import pandas as pd

headers = {'User-Agent' : 'Mozilla/5.0'}
df_headers = ['position_number' , 'position_description' , 'name' , 'dob' , 'nationality' , 'height' , 'foot' , 'joined' , 'signed_from' , 'contract_until']
r = requests.get('https://www.transfermarkt.com/grasshopper-club-zurich-u17/kader/verein/59526/saison_id/2018/plus/1', headers = headers)
soup = bs(r.content, 'html.parser')

position_number = [item.text for item in soup.select('.items .rn_nummer')]
position_description = [item.text for item in soup.select('.items td:not([class])')]
name = [item.text for item in soup.select('.hide-for-small .spielprofil_tooltip')]
dob = [item.text for item in soup.select('.zentriert:nth-of-type(3):not([id])')]
nationality = ['/'.join([i['title'] for i in item.select('[title]')]) for item in soup.select('.zentriert:nth-of-type(4):not([id])')]
height = [item.text for item in soup.select('.zentriert:nth-of-type(5):not([id])')]
foot = [item.text for item in soup.select('.zentriert:nth-of-type(6):not([id])')]
joined = [item.text for item in soup.select('.zentriert:nth-of-type(7):not([id])')]
signed_from = ['/'.join([item['title'].lstrip(': '), item['alt']])  for item in soup.select('.zentriert:nth-of-type(8):not([id]) [title]')]
contract_until = [item.text for item in soup.select('.zentriert:nth-of-type(9):not([id])')]

df = pd.DataFrame(list(zip(position_number, position_description, name, dob, nationality, height, foot, joined, signed_from, contract_until)), columns = df_headers)
print(df)

df.to_csv(r'Uljanas-MacBook-Air-2:~ uljanadufour$\grasshopper18.csv')

This is the output for the page that should contain 22 rows:

  position_number  ... contract_until
0               -  ...              -
1               -  ...              -
2               -  ...              -
3               -  ...              -
4               -  ...              -
5               -  ...              -
6               -  ...              -
7               -  ...              -
8               -  ...     30.06.2019

[9 rows x 10 columns]

Process finished with exit code 0

I can't figure out why this works for some pages but not for others. Any help would be greatly appreciated.
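A quick way to see where rows are being lost is to compare the lengths of the scraped lists before zipping them, since zip() silently truncates to the shortest list (a minimal diagnostic sketch using the lists defined above):

# Diagnostic sketch: any column shorter than the others points at the
# selector that is skipping rows for this page.
columns = {'position_number': position_number, 'position_description': position_description,
           'name': name, 'dob': dob, 'nationality': nationality, 'height': height,
           'foot': foot, 'joined': joined, 'signed_from': signed_from,
           'contract_until': contract_until}
for label, values in columns.items():
    print(label, len(values))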

1 answer:

Answer 0 (score: 1):

The problem is in this line:

signed_from = ['/'.join([item['title'].lstrip(': '), item['alt']])  for item in soup.select('.zentriert:nth-of-type(8):not([id]) [title]')]

The trailing [title] in that selector only matches cells that actually contain an element with a title attribute, so rows where the "signed from" cell is empty are skipped entirely, signed_from ends up shorter than the other lists, and zip() truncates the DataFrame. You can change the line as follows so that those rows produce an empty string instead:

signed_from = ['/'.join([item.find('img')['title'].lstrip(': '), item.find('img')['alt']])  if item.find('a') else '' for item in soup.select('.zentriert:nth-of-type(8):not([id])')]
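For readability, the same fix can be written as a small helper (a sketch equivalent to the one-liner above, with an extra guard in case a cell has no img tag):

# Sketch: same logic as the one-liner, written out for readability.
def extract_signed_from(cell):
    # Rows without a previous-club link get an empty string instead of
    # being skipped, so the list stays aligned with the other columns.
    img = cell.find('img')
    if cell.find('a') and img is not None:
        return '/'.join([img['title'].lstrip(': '), img['alt']])
    return ''

signed_from = [extract_signed_from(cell)
               for cell in soup.select('.zentriert:nth-of-type(8):not([id])')]

With this change signed_from has one entry per row, so zip() no longer drops rows from the DataFrame.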