Scraping an .aspx form with Python

Date: 2019-02-28 13:57:08

Tags: python html web-scraping beautifulsoup python-requests

I am trying to scrape https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx, which on paper looks like an easy task, with plenty of resources available in other SO questions. Still, no matter how I change the request, I keep running into the same error.

I tried the following:

import requests
from bs4 import BeautifulSoup

url = "https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx"

with requests.Session() as s:
    s.headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.115 Safari/537.36'}

    response = s.get(url)
    soup = BeautifulSoup(response.content)

    data = {
        "ctl00$MainContent$rdoCommoditySystem": "ELEC",
        "ctl00$MainContent$lbReportName": "171",
        "ctl00$MainContent$ddlFrom": "01/11/2018 12:00:00 AM",
        "ctl00$MainContent$rdoReportFormat": "Excel",
        "ctl00$MainContent$btnView": "View",
        "__EVENTVALIDATION": soup.find('input', {'name': '__EVENTVALIDATION'}).get('value', ''),
        "__VIEWSTATE": soup.find('input', {'name': '__VIEWSTATE'}).get('value', ''),
        "__VIEWSTATEGENERATOR": soup.find('input', {'name': '__VIEWSTATEGENERATOR'}).get('value', '')
    }

    response = requests.post(url, data=data)

When I print the `response.content` object, I get the following message (tl;dr, it says "System error occurred. The system will alert technical support of the problem."):

b'\r\n\r\n<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">\r\n\r\n<html xmlns="http://www.w3.org/1999/xhtml" >\r\n<head><title>\r\n\r\n</title></head>\r\n<body>\r\n   <form name="form1" method="post" action="Error.aspx?ErrorID=86e0c980-7832-4fc5-b5a8-a8254dd8ad69" id="form1">\r\n<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwUKMTg3NjI4NzkzNmRkaCA5IA9393/t2iMAptLYU1QiPc8=" />\r\n\r\n<input type="hidden" name="__VIEWSTATEGENERATOR" id="__VIEWSTATEGENERATOR" value="9D6BDE45" />\r\n    <div>\r\n        <h4>\r\n            <span id="lblError">Error</span>\r\n        </h4>\r\n        <span id="lblMessage" class="Validator"><font color="Black">System error occurred. The system will alert technical support of the problem.</font></span>\r\n    </div>\r\n    </form>\r\n</body>\r\n</html>\r\n'

I have used other options, such as changing the `__EVENTTARGET` parameter as suggested here, and passing the cookies from the first request into the POST request. After inspecting the page's source code, I noticed that the form has a "query" feature that needs `__EVENTTARGET` and `__EVENTARGUMENT` in order to work:

//<![CDATA[
var theForm = document.forms['aspnetForm'];
if (!theForm) {
    theForm = document.aspnetForm;
}
function __doPostBack(eventTarget, eventArgument) {
    if (!theForm.onsubmit || (theForm.onsubmit() != false)) {
        theForm.__EVENTTARGET.value = eventTarget;
        theForm.__EVENTARGUMENT.value = eventArgument;
        theForm.submit();
    }
}
//]]>

But both parameters are empty in the body of the POST response (this can be checked in the Chrome developer inspector). Another problem is that I need to download the file in one of the formats (PDF or Excel), or get the HTML version, but the .ASPX form does not render the information in the same page; instead it opens a new URL: {{3}}, rather than the information.
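In a requests-only workflow, `__doPostBack` can be simulated by filling those two hidden fields yourself before POSTing. A minimal sketch, with no network call and illustrative values only (the real `__VIEWSTATE` string comes from the preceding GET; the control name is just an example):

```python
def build_postback(hidden_fields, event_target, event_argument=""):
    """Return a POST payload that mimics what __doPostBack submits:
    the page's current hidden fields plus the event target/argument."""
    payload = dict(hidden_fields)
    payload["__EVENTTARGET"] = event_target
    payload["__EVENTARGUMENT"] = event_argument
    return payload

# Illustrative stand-in values; a real request would use the hidden
# inputs scraped from the server's latest response.
payload = build_postback(
    {"__VIEWSTATE": "/wEPDwUK...==", "__VIEWSTATEGENERATOR": "9D6BDE45"},
    "ctl00$MainContent$rdoCommoditySystem$2",
)
print(payload["__EVENTTARGET"])
```

This payload would then be sent with `session.post(url, data=payload)`, exactly as if the JavaScript had submitted the form.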

I am lost here; what am I missing?

1 answer:

Answer 0 (score: -1):

I was able to solve this successfully by handling the `__VIEWSTATE` value more carefully. In ASPX forms, the page uses `__VIEWSTATE` to encode the state of the web page (i.e. which options of the form the user has already selected — or, in our case, the requesting user) and to allow the next request.
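Those state values are ordinary hidden `<input>` tags, so they can be re-read from every response and resent with the next POST. A minimal offline sketch (the HTML snippet is a stand-in for the real page, with made-up values; the field names are the standard ASP.NET hidden inputs):

```python
from bs4 import BeautifulSoup

# Stand-in for a Statistics.aspx response body.
html = """
<form id="aspnetForm">
  <input type="hidden" name="__VIEWSTATE" value="/wEPDwUK...==" />
  <input type="hidden" name="__VIEWSTATEGENERATOR" value="9D6BDE45" />
  <input type="hidden" name="__EVENTVALIDATION" value="/wEdAAU...==" />
</form>
"""

soup = BeautifulSoup(html, "html.parser")

# Collect every hidden ASP.NET state field; each request in the chain
# must resend the server's latest copy of these.
state = {tag["name"]: tag["value"] for tag in soup.select("input[name^=__]")}
print(sorted(state))
```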

In this case:

  1. Request the page to get all the headers, store them in payload, and add my first selection by updating the dictionary.
  2. Make another request with the updated __VIEWSTATE value, adding more options to my request.
  3. Same as 2, but adding the final options.

This gave me five HTML bodies, the same ones I got when making the requests with a browser, but it still would not display the data, nor let me download the file as part of the last response body. selenium could solve this, but I have not succeeded with it. This question on SO describes my problem.

url = 'https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx'

with requests.Session() as s:
    s.headers = {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.115 Safari/537.36",
        "Content-Type": "application/x-www-form-urlencoded",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Referer": "https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx",
        "Accept-Encoding": "gzip, deflate, br",
        "Accept-Language": "en-US,en;q=0.9"
    }

    response = s.get(url)
    soup = BeautifulSoup(response.content, 'html5lib')

    # Collect the form's own fields and the hidden ASP.NET state fields.
    data = {tag['name']: tag['value']
            for tag in soup.select('input[name^=ctl00]') if tag.get('value')}
    state = {tag['name']: tag['value']
             for tag in soup.select('input[name^=__]')}

    payload = data.copy()
    payload.update(state)

    # First selection: commodity system.
    payload.update({
        "ctl00$MainContent$rdoCommoditySystem": "ELEC",
        "ctl00$MainContent$lbReportName": '76',
        "ctl00$MainContent$rdoReportFormat": 'PDF',
        "ctl00$MainContent$ddlStartYear": "2008",
        "__EVENTTARGET": "ctl00$MainContent$rdoCommoditySystem$2"
    })

    print(payload['__EVENTTARGET'])
    print(payload['__VIEWSTATE'][-20:])

    response = s.post(url, data=payload, allow_redirects=True)
    soup = BeautifulSoup(response.content, 'html5lib')

    # Refresh the hidden state fields from the new response.
    state = {tag['name']: tag['value']
             for tag in soup.select('input[name^=__]')}

    # Second selection: report name and start date.
    payload.pop("ctl00$MainContent$ddlStartYear")
    payload.update(state)
    payload.update({
        "__EVENTTARGET": "ctl00$MainContent$lbReportName",
        "ctl00$MainContent$lbReportName": "171",
        "ctl00$MainContent$ddlFrom": "01/12/2018 12:00:00 AM"
    })

    print(payload['__EVENTTARGET'])
    print(payload['__VIEWSTATE'][-20:])

    response = s.post(url, data=payload, allow_redirects=True)
    soup = BeautifulSoup(response.content, 'html5lib')

    state = {tag['name']: tag['value']
             for tag in soup.select('input[name^=__]')}

    # Final selection: date range, output format, and the View button.
    payload.update(state)
    payload.update({
        "ctl00$MainContent$ddlFrom": "01/10/1990 12:00:00 AM",
        "ctl00$MainContent$rdoReportFormat": "HTML",
        "ctl00$MainContent$btnView": "View"
    })

    print(payload['__VIEWSTATE'])

    response = s.post(url, data=payload, allow_redirects=True)
    print(response.text)
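If the final POST ever does return the report bytes rather than the error page, the body could be written to disk with a small helper like the one below. This is a sketch only, demonstrated with placeholder bytes, since the request above still does not return the file:

```python
import os
import tempfile

def save_report(content: bytes, path: str) -> int:
    """Write a response body to disk; returns the number of bytes written.
    Only useful once the server actually returns the PDF/Excel bytes."""
    with open(path, "wb") as f:
        return f.write(content)

# Demonstrated with fake bytes; a real call would pass response.content.
path = os.path.join(tempfile.gettempdir(), "commodity_report.pdf")
written = save_report(b"%PDF-1.4 placeholder", path)
print(written)
```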