Scraping an .aspx page (HKEX) with Python

Asked: 2018-03-01 03:16:31

Tags: python asp.net web-scraping

I want to scrape the following website: http://www.hkexnews.hk/listedco/listconews/advancedsearch/search_active_main_c.aspx

I am using Python 2.7, and here is my code:

import urllib
from bs4 import BeautifulSoup

headers = {
'Accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36',
'Content-Type': 'application/x-www-form-urlencoded',
'Accept-Encoding': 'gzip, deflate',
'Accept-Language': 'en-GB,en;q=0.9,en-US;q=0.8,zh-TW;q=0.7,zh;q=0.6,zh-CN;q=0.5',}

class MyOpener(urllib.FancyURLopener):
    version = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36'

myopener = MyOpener()
url = 'http://www.hkexnews.hk/listedco/listconews/advancedsearch/search_active_main_c.aspx'

f = myopener.open(url)
soup_dummy = BeautifulSoup(f,"html5lib")

viewstate = soup_dummy.select("#__VIEWSTATE")[0]['value']
viewstategen = soup_dummy.select("#__VIEWSTATEGENERATOR")[0]['value']

soup_dummy.find(id="aspnetForm")


formData = (
    ('__VIEWSTATE', viewstate),
    ('__VIEWSTATEGENERATOR', viewstategen),
    ('ctl00$txt_stock_code', '00005')
)

encodedFields = urllib.urlencode(formData)
# second HTTP request with form data
f = myopener.open(url, encodedFields)
soup = BeautifulSoup(f,"html5lib")
date = soup.find("span", id="lbDateTime")
print(date)

Nothing gets collected. When I run this code it prints "None", and if I change print(date) to print(date.text) I get an error: AttributeError: 'NoneType' object has no attribute 'text'.
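In other words, find() returned None because nothing in the second response has id="lbDateTime", and calling .text on None then raises the AttributeError. A minimal guard (a sketch against the same soup object) makes that explicit:

date = soup.find("span", id="lbDateTime")
if date is None:
    # nothing in the response matched that id, so the POST likely did not return the results page
    print("span#lbDateTime not found")
else:
    print(date.text)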

1 answer:

Answer 0 (score: 1)

Your question is a bit vague, but here is my attempt:

Running your code gives me the following response: The page requested may have been relocated, renamed or removed from the Hong Kong Exchanges and Clearing Limited, or HKEX, website.

Also, I don't see any span whose id equals lbDateTime, but I do see span ids that end with lbDateTime. If you are not getting the error above, you can try this instead (see the sketch below): dates = soup.findAll("span", {"id": lambda L: L and L.endswith('lbDateTime')})

(Source: https://stackoverflow.com/a/14257743/942692)
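The endswith match is needed because ASP.NET prefixes each server control's rendered id with its naming-container path (something like ctl00_...), so only the tail of the id is stable. As a sketch, assuming a reasonably recent BeautifulSoup, the same lookup can also be written with a CSS "ends-with" attribute selector:

# every span whose rendered id ends with 'lbDateTime'
dates = soup.findAll("span", {"id": lambda L: L and L.endswith('lbDateTime')})
# equivalent, using a CSS attribute selector
dates = soup.select('span[id$="lbDateTime"]')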

If you do get the same response, you will need to fix your request. I am not familiar with urllib, so I can't help you there, but if you are able to use the requests library, here is some code that works for me (dates comes back as a ResultSet of 20 elements):

import requests
from bs4 import BeautifulSoup

headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36',
    'Content-Type': 'application/x-www-form-urlencoded',
    'Accept-Encoding': 'gzip, deflate',
    'Accept-Language': 'en-GB,en;q=0.9,en-US;q=0.8,zh-TW;q=0.7,zh;q=0.6,zh-CN;q=0.5'}

session = requests.session()
# first GET the search page so the server issues the hidden ASP.NET state fields
response = session.get('http://www.hkexnews.hk/listedco/listconews/advancedsearch/search_active_main_c.aspx', headers={
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36'})
soup = BeautifulSoup(response.content, 'html.parser')
# echo the hidden ASP.NET fields back in the POST, as the form expects
form_data = {
    '__VIEWSTATE': soup.find('input', {'name': '__VIEWSTATE'}).get('value'),
    '__VIEWSTATEGENERATOR': soup.find('input', {'name': '__VIEWSTATEGENERATOR'}).get('value'),
    '__VIEWSTATEENCRYPTED': soup.find('input', {'name': '__VIEWSTATEENCRYPTED'}).get('value')
}
# POST the form back to the same page within the same session
f = session.post('http://www.hkexnews.hk/listedco/listconews/advancedsearch/search_active_main_c.aspx', data=form_data,
                 headers=headers)
soup = BeautifulSoup(f.content, 'html.parser')
dates = soup.findAll("span", {"id": lambda L: L and L.endswith('lbDateTime')})
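Each item in dates is still a bs4 Tag, so a quick sanity check is to print the matched ids and their text (a small sketch, assuming the POST above succeeded):

for d in dates:
    print(d.get('id'), d.get_text(strip=True))

To reproduce the original stock-code search you would presumably also need to add the search fields from the question (for example 'ctl00$txt_stock_code') to form_data, but the bare POST above already returns a page of results.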