Scraping data from multiple HTML tables on a website in Python

Date: 2014-09-21 13:24:50

Tags: python html parsing web beautifulsoup

I am trying to get a time series from this website into Python: http://www.boerse-frankfurt.de/en/etfs/db+x+trackers+msci+world+information+technology+trn+index+ucits+etf+LU0540980496/price+turnover+history/historical+data#page=1

I have gotten quite far, but I don't know how to retrieve all of the data rather than just the first 50 rows visible on the page. To view the rest online you have to click through the results at the bottom of the table. I would like to be able to specify a start date and an end date in Python and get all the corresponding dates and prices in a list. Here is what I have so far:

 from bs4 import BeautifulSoup
 import requests
 import lxml
 import re

 url = 'http://www.boerse-frankfurt.de/en/etfs/db+x+trackers+msci+world+information+technology+trn+index+ucits+etf+LU0540980496/price+turnover+history/historical+data'
 soup = BeautifulSoup(requests.get(url).text, 'lxml')

 dates  = soup.findAll('td', class_='column-date')
 dates  = [re.sub(r'[\n\t\s]', '', d.string) for d in dates]
 prices = soup.findAll('td', class_='column-price')
 prices = [re.sub(r'[\n\t\s]', '', p.string) for p in prices]

1 Answer:

Answer 0 (score: 1)

You need to iterate over the remaining pages, which you can do with POST requests. The server expects to receive a certain structure with each POST request; the structure is defined below, and the page number is its 'page' parameter. The structure has several other parameters that I haven't tested but which would be interesting to try, such as items_per_page, max_time and min_time. Here is some example code:

from bs4 import BeautifulSoup
import urllib
import urllib2
import re

url = 'http://www.boerse-frankfurt.de/en/parts/boxes/history/_histdata_full.m'
values = {'COMPONENT_ID':'PREeb7da7a4f4654f818494b6189b755e76', 
    'ag':'103708549', 
    'boerse_id': '12',
    'include_url': '/parts/boxes/history/_histdata_full.m',
    'item_count': '96',
    'items_per_page': '50',
    'lang': 'en',
    'link_id': '',
    'max_time': '2014-09-20',
    'min_time': '2014-05-09',
    'page': 1,
    'page_size': '50',
    'pages_total': '2',
    'secu': '103708549',
    'template': '0',
    'titel': '',
    'title': '',
    'title_link': '',
    'use_external_secu': '1'}

dates = []
prices = []
while True:
    # POST the form data for the current page
    data = urllib.urlencode(values)
    response = urllib2.urlopen(url, data)
    soup = BeautifulSoup(response.read())
    temp_dates  = soup.findAll('td', class_='column-date')
    temp_dates  = [re.sub(r'[\n\t\s]', '', d.string) for d in temp_dates]
    temp_prices = soup.findAll('td', class_='column-price')
    temp_prices = [re.sub(r'[\n\t\s]', '', p.string) for p in temp_prices]
    if not temp_prices:
        # an empty page means we have run out of results
        break
    else:
        dates = dates + temp_dates
        prices = prices + temp_prices
        values['page'] += 1
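The core pattern in the answer (request page 1, 2, 3, … and stop as soon as a page comes back with no price cells) can be sketched independently of the network layer. The sketch below uses a hypothetical `fetch_page` callable standing in for the POST request, so the accumulation and stop logic can be seen and tested offline:

```python
import re

def clean(cell):
    # strip tabs, newlines and any other whitespace from a raw table cell
    return re.sub(r'\s+', '', cell)

def scrape_all_pages(fetch_page):
    """Accumulate (date, price) rows until a page comes back empty.

    fetch_page(n) is a hypothetical stand-in for the POST request in the
    answer above; it should return a list of (raw_date, raw_price) strings
    for page n, or an empty list once n is past the last page.
    """
    dates, prices = [], []
    page = 1
    while True:
        rows = fetch_page(page)
        if not rows:
            # an empty page means we have run out of results
            break
        dates += [clean(d) for d, _ in rows]
        prices += [clean(p) for _, p in rows]
        page += 1
    return dates, prices

# Simulate two pages of results followed by an empty third page.
pages = {1: [('\t2014-09-19\n', ' 25.31 ')],
         2: [('\t2014-09-18\n', ' 25.12 ')]}
dates, prices = scrape_all_pages(lambda n: pages.get(n, []))
```

In the real scraper, `fetch_page` would send the `values` dict with the updated `page` field, exactly as in the loop above.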