Scraping a table by id with BeautifulSoup and Python

Asked: 2018-01-02 08:25:14

Tags: python web-scraping beautifulsoup

I'm a beginner learning to use BeautifulSoup, and I'm having trouble scraping a table. This is the HTML I'm trying to parse:

<table id="ctl00_mainContent_DataList1" cellspacing="0" style="width:80%;border-collapse:collapse;">
    <tbody>
        <tr><td><table width="90%" cellpadding="5" cellspacing="0">...</table></td></tr>
        <tr><td><table width="90%" cellpadding="5" cellspacing="0">...</table></td></tr>
        <tr><td><table width="90%" cellpadding="5" cellspacing="0">...</table></td></tr>
        <tr><td><table width="90%" cellpadding="5" cellspacing="0">...</table></td></tr>
        ...

My code:

from urllib.request import urlopen
from bs4 import BeautifulSoup

quote_page = 'https://www.bcdental.org/yourdentalhealth/findadentist.aspx'
page = urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')

table = soup.find('table', id="ctl00_mainContent_DataList1")
rows = table.findAll('tr')

I get AttributeError: 'NoneType' object has no attribute 'findAll'. I'm using Python 3.6 and a Jupyter notebook, in case that matters.
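The error means `soup.find()` returned `None`, i.e. no `<table>` with that id exists in the HTML that was downloaded. A minimal sketch reproducing and guarding against this, using a stand-in HTML string rather than the real page:

```python
from bs4 import BeautifulSoup

# stand-in page that, like the GET response here, has no matching table
html = '<html><body><p>no table here</p></body></html>'
soup = BeautifulSoup(html, 'html.parser')

table = soup.find('table', id='ctl00_mainContent_DataList1')
print(table is None)  # True - find() returns None when nothing matches

# chaining .findAll() on None raises the AttributeError;
# checking first avoids the crash
rows = table.findAll('tr') if table is not None else []
print(rows)  # []
```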

EDIT: The table data I'm trying to parse only appears on the page after requesting a search (in the City field, select Burnaby, then click Search). The table ctl00_mainContent_DataList1 is the list of dentists displayed after submitting the search.

1 answer:

Answer 0 (score: 2)

First of all: I use requests because it makes working with cookies, headers, etc. easier.

The page is generated by ASP.NET and it sends values __VIEWSTATE, __VIEWSTATEGENERATOR and __EVENTVALIDATION which you have to send back in a POST request.

You have to load the page with GET first to obtain those values. You can also use requests.Session() to handle cookies, which may be needed.

Next, you have to copy those values, add the parameters from the form, and send everything with POST.
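The "copy those values" step can be sketched on its own. The HTML fragment below is a made-up stand-in with fake values ('AAA', 'BBB', 'CCC'); on the real site these hidden fields come from the GET response:

```python
from bs4 import BeautifulSoup

# hypothetical fragment imitating ASP.NET's hidden form fields
html = '''
<form>
  <input type="hidden" id="__VIEWSTATE" name="__VIEWSTATE" value="AAA"/>
  <input type="hidden" id="__VIEWSTATEGENERATOR" name="__VIEWSTATEGENERATOR" value="BBB"/>
  <input type="hidden" id="__EVENTVALIDATION" name="__EVENTVALIDATION" value="CCC"/>
</form>
'''
soup = BeautifulSoup(html, 'html.parser')

# collect the hidden values so they can be re-sent in the POST data
params = {}
for key in ['__VIEWSTATE', '__VIEWSTATEGENERATOR', '__EVENTVALIDATION']:
    field = soup.find('input', id=key)
    if field is not None:  # a field may be absent on some pages
        params[key] = field['value']

print(params)  # {'__VIEWSTATE': 'AAA', '__VIEWSTATEGENERATOR': 'BBB', '__EVENTVALIDATION': 'CCC'}
```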

In the code I only included the parameters that are always sent.

'526' is the code for Vancouver. You can find the other codes in the <select> tag. If you want other options, you may have to add other parameters.
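Listing the city codes from that <select> can be sketched like this. The fragment below is invented for illustration (only '526' for Vancouver comes from the answer; the other values are hypothetical), and on the real site you would parse the GET response instead:

```python
from bs4 import BeautifulSoup

# hypothetical <select> fragment; real options come from the live page
html = '''
<select name="ctl00$mainContent$drpCity" id="ctl00_mainContent_drpCity">
  <option value="0">All</option>
  <option value="526">Vancouver</option>
  <option value="123">Burnaby</option>
</select>
'''
soup = BeautifulSoup(html, 'html.parser')

select = soup.find('select', id='ctl00_mainContent_drpCity')
# map visible city name -> code to send in 'ctl00$mainContent$drpCity'
codes = {opt.text.strip(): opt['value'] for opt in select.find_all('option')}
print(codes)  # {'All': '0', 'Vancouver': '526', 'Burnaby': '123'}
```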

E.g. ctl00$mainContent$chkUndr4Ref: on applies to Children: 3 & Under - Diagnose & Refer.

EDIT: because there are <table> elements inside <tr>, find_all('tr') returned too many elements (the outer tr and the inner tr), and later find_all('td') gave the same td many times. I changed find_all('tr') into find_all('table') and it should stop duplicating the data.

import requests
from bs4 import BeautifulSoup

url = 'https://www.bcdental.org/yourdentalhealth/findadentist.aspx'

# --- session ---

s = requests.Session()  # to automatically copy cookies
#s.headers.update({'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0'})

# --- GET request ---

# get page to get cookies and params
response = s.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

# --- set params ---

params = {
    # session - copy from GET request
    #'EktronClientManager': '',
    #'__VIEWSTATE': '',
    #'__VIEWSTATEGENERATOR': '',
    #'__EVENTVALIDATION': '',

    # main options
    'ctl00$terms': '',
    'ctl00$mainContent$drpCity': '526',
    'ctl00$mainContent$txtPostalCode': '',
    'ctl00$mainContent$drpSpecialty': 'GP',
    'ctl00$mainContent$drpLanguage': '0',
    'ctl00$mainContent$drpSedation': '0',
    'ctl00$mainContent$btnSearch': '+Search+',

    # other options
    #'ctl00$mainContent$chkUndr4Ref': 'on',
}

# copy from GET request
for key in ['EktronClientManager', '__VIEWSTATE', '__VIEWSTATEGENERATOR', '__EVENTVALIDATION']:
    value = soup.find('input', id=key)['value']
    params[key] = value
    #print(key, ':', value)

# --- POST request ---

# get page with table - using params
response = s.post(url, data=params)  #, headers={'Referer': url})
soup = BeautifulSoup(response.text, 'html.parser')

# --- data ---

table = soup.find('table', id='ctl00_mainContent_DataList1')
if not table:
    print('no table')
    #table = soup.find_all('table')
    #print('count:', len(table))
    #print(response.text)
else:
    for row in table.find_all('table'):
        for column in row.find_all('td'):
            text = ', '.join(x.strip() for x in column.text.split('\n') if x.strip()).strip()
            print(text)
        print('-----')

Map
Dr. Kashyap Vora, 6145 Fraser Street, Vancouver  V5W 2Z9
604 321 1869, www.voradental.ca
-----
Map
Dr. Niloufar Shirzad, Harbour Centre Dental, L19 - 555 Hastings Street West, Vancouver  V6B 4N6
604 669 1195, www.harbourcentredental.com
-----
Map
Dr. Janice Brennan, 902 - 805 Broadway West, Vancouver  V5Z 1K1
604 872 2525
-----
Map
Dr. Rosemary Chang, 1240 Kingsway, Vancouver  V5V 3E1
604 873 1211
-----
Map
Dr. Mersedeh Shahabaldine, 3641 Broadway West, Vancouver  V6R 2B8
604 734 2114, www.westkitsdental.com
-----

Part of the result is shown above.
