Scraping data from a paginated table with Python

Asked: 2015-02-06 13:22:49

Tags: python beautifulsoup screen-scraping

I am scraping data from Google Finance's historical-prices page (http://www.google.com/finance/historical?q=NSE%3ASIEMENS&ei=PLfUVIDTDuSRiQKhwYGQBQ).

I can scrape the 30 rows shown on the current page. The problem I'm facing is that I can't scrape the rest of the table (rows 31-241). How do I get to the next page or link? Here is my code:

import urllib2
import xlwt  # to write into an Excel spreadsheet
from bs4 import BeautifulSoup

# Main Coding Section

stock_links = open('stock_link_list.txt', 'r')  # open the list of URLs for reading

#url = "https://www.google.com/finance/historical?q=NSE%3ASIEMENS&ei=zHXOVLPnApG2iALxxYCADQ"
for url in stock_links:
    OurFile = urllib2.urlopen(url)
    OurHtml = OurFile.read()
    OurFile.close()

    soup = BeautifulSoup(OurHtml)
    #soup1 = soup.find("div", {"class": "gf-table-wrapper sfe-break-bottom-16"}).get_text()
    soup1 = soup.find("table", {"class": "gf-table historical_price"}).get_text()

    end = url.index('&')
    filename = url[47:end]          # derive a filename from the stock symbol in the URL
    file = open(filename, 'w')      # open a text file for writing
    file.write(soup1)               # write the scraped table text to the file
    #file.write(soup1.get_text())
    file.close()                    # close the text file

2 Answers:

Answer 0 (score: 1):

You'll have to fine-tune it, and I would catch more specific errors, but you can keep increasing start to fetch the next batch of data:

url = "https://www.google.com/finance/historical?q=NSE%3ASIEMENS&ei=W8LUVLHnAoOswAOFs4DACg&start={}&num=30"

from bs4 import BeautifulSoup
import requests

# Main Coding Section
start = 0
while True:
    try:
        nxt = url.format(start)   # fill in the next start offset
        r = requests.get(nxt)
        soup = BeautifulSoup(r.content)
        print(soup.find("table", {"class": "gf-table historical_price"}).get_text())
    except Exception as e:
        # once there are no more rows, find() returns None and get_text()
        # raises, which ends the loop
        print(e)
        break
    start += 30

This fetches all of the table data, down to the last date, Feb 7:

......

Date
Open
High
Low
Close
Volume

Feb 7, 2014
552.60
557.90
548.25
551.50
119,711

Answer 1 (score: 0):

At first glance the Row Limit option allows displaying at most 30 rows per page, but I manually changed the query-string parameter to a larger number and realized that up to 200 rows can be viewed per page.

Change the URL to

https://www.google.com/finance/historical?q=NSE%3ASIEMENS&ei=OM3UVLFtkLnzBsjIgYAI&start=0&num=200

and it will display 200 rows.

Then change it to start=200&num=400, and so on.
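As an illustration, here is a minimal sketch of paging through in 200-row chunks, in the spirit of answer 0's loop. Only the start/num behaviour is taken from this answer; the URL template (ei token omitted) and the stopping test are assumptions.

import requests
from bs4 import BeautifulSoup

# assumed URL template for the sketch
base = ("https://www.google.com/finance/historical"
        "?q=NSE%3ASIEMENS&start={}&num=200")

start = 0
while True:
    soup = BeautifulSoup(requests.get(base.format(start)).content)
    table = soup.find("table", {"class": "gf-table historical_price"})
    if table is None:   # assumption: no table is rendered past the last page
        break
    print(table.get_text())
    start += 200        # advance to the next block of 200 rows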

It would be more logical, though, if you have many other links of this type, to scrape the Pagination area, find the last TR, grab the link to the next page, and scrape that; a rough sketch follows.
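Here is one way that could look, reusing requests/BeautifulSoup as in answer 0. The markup details are an assumption, not confirmed against the live page: the pagination links are taken to sit in the last <tr> of the page, with its last <a> pointing to the next page.

import requests
from urlparse import urljoin  # urllib.parse on Python 3
from bs4 import BeautifulSoup

url = "https://www.google.com/finance/historical?q=NSE%3ASIEMENS"
while url:
    soup = BeautifulSoup(requests.get(url).content)
    table = soup.find("table", {"class": "gf-table historical_price"})
    if table is None:
        break
    print(table.get_text())

    # Assumed markup: the pagination row is the last <tr> on the page and
    # its last <a> is the "next page" link.
    rows = soup.find_all("tr")
    links = rows[-1].find_all("a") if rows else []
    # resolve a possibly relative href against the current page; stop when
    # there is no further link
    url = urljoin(url, links[-1]["href"]) if links else None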