Parsing multiple tables with Python BeautifulSoup4

Date: 2017-01-10 10:43:27

Tags: python-3.x beautifulsoup

I have been trying to parse the multiple tables on this page: http://www.cboe.com/strategies/vix/optionsintro/part5.aspx

Here is my code:

import bs4 as bs
import pandas as pd
from urllib.request import Request, urlopen
req = Request('http://www.cboe.com/strategies/vix/optionsintro/part5.aspx', headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req).read()
soup = bs.BeautifulSoup(webpage,'lxml')
table = soup.findAll('table',{'class':'table  oddeven  center  padvertical  cellborders  mobile-load'})
table_rows = table.find_all('tr')
for tr in table_rows:
    td = tr.find_all('td')
    row = [i.text for i in td]
    print(row)

However, it keeps producing this message:

AttributeError: 'ResultSet' object has no attribute 'find_all'

Would someone mind helping me out?

2 answers:

Answer 0 (score: 1)

table = soup.findAll('table',{'class':'table  oddeven  center  padvertical  cellborders  mobile-load'})

This returns a ResultSet, a list-like collection of the matching table tags. find_all() can only be called on a Tag object, not on a ResultSet, which is why the AttributeError is raised.
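
For illustration, a small snippet showing the type difference (assuming soup has already been built as in the question); find_all() works on the individual Tag objects inside the ResultSet, not on the ResultSet itself:

# soup.find_all() returns a ResultSet (list-like), not a single Tag
tables = soup.find_all('table')
print(type(tables))       # <class 'bs4.element.ResultSet'>
print(type(tables[0]))    # <class 'bs4.element.Tag'>

# find_all() is available on each Tag, so call it per table
rows = tables[0].find_all('tr')

The original answer instead sidesteps the issue by iterating over all tr elements directly: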

import bs4 as bs

from urllib.request import Request, urlopen
req = Request('http://www.cboe.com/strategies/vix/optionsintro/part5.aspx', headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req).read()
soup = bs.BeautifulSoup(webpage,'lxml')
for tr in soup.find_all('tr'):
    # stripped_strings yields each text fragment in the row with surrounding whitespace removed
    td = [td for td in tr.stripped_strings]
    print(td)

Output:

['Bid', 'Ask']
['VIX Dec 10 Call', '6.40', '6.80']
['VIX Dec 15 Call', '2.70', '2.90']
['VIX Dec 16 Call', '2.30', '2.40']
['VIX Dec 17 Call', '1.80', '1.90']
['VIX Dec 18 Call', '1.45', '1.55']
['VIX Dec 19 Call', '1.15', '1.25']
['VIX Dec 20 Call', '0.95', '1.00']
['Bid', 'Ask']
['VIX Dec 10 Call', '9.30', '9.70']
['VIX Dec 15 Call', '4.90', '5.20']
['VIX Dec 16 Call', '4.30', '4.60']
['VIX Dec 17 Call', '3.70', '3.90']
['VIX Dec 18 Call', '3.10', '3.30']
['VIX Dec 19 Call', '2.65', '2.75']
['VIX Dec 20 Call', '2.25', '2.35']

This page contains only the two tables we need, so simply finding all the tr elements returns the information we are after.
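
Since the asker already imports pandas, another option (not part of the original answers) would be pandas.read_html, which turns every <table> on the page into a DataFrame. A minimal sketch, assuming the page is fetched the same way as in the question:

import pandas as pd
from urllib.request import Request, urlopen

req = Request('http://www.cboe.com/strategies/vix/optionsintro/part5.aspx',
              headers={'User-Agent': 'Mozilla/5.0'})
html = urlopen(req).read()

# read_html returns a list with one DataFrame per <table> element it finds
dfs = pd.read_html(html)
for df in dfs:
    print(df)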

Answer 1 (score: 0)

The following code works for me:

import urllib2
from bs4 import BeautifulSoup
opener = urllib2.build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0')]
url = 'http://www.cboe.com/strategies/vix/optionsintro/part5.aspx'
response = opener.open(url)
soup = BeautifulSoup(response, "lxml")

tables = soup.findAll('table',{'class':'table  oddeven  center  padvertical  cellborders  mobile-load'})
for table in tables:
    table_rows = table.find_all('tr')
    for tr in table_rows:
        td = tr.find_all('td')
        row = [i.text for i in td]
        print(row)
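
Note that urllib2 only exists in Python 2, while the question is tagged python-3.x. A sketch of the same approach adapted to Python 3 with urllib.request (as already used in the question):

from urllib.request import build_opener
from bs4 import BeautifulSoup

# build_opener lets us attach a User-Agent header, mirroring the urllib2 version
opener = build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0')]
url = 'http://www.cboe.com/strategies/vix/optionsintro/part5.aspx'
response = opener.open(url)
soup = BeautifulSoup(response, 'lxml')

tables = soup.find_all('table', {'class': 'table  oddeven  center  padvertical  cellborders  mobile-load'})
for table in tables:
    for tr in table.find_all('tr'):
        row = [td.text for td in tr.find_all('td')]
        print(row)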