Scraping results from an ASPX form while avoiding Selenium

Date: 2019-03-01 19:22:06

Tags: python selenium web-scraping python-requests python-requests-html

I previously asked (see here) how to scrape results from an ASPX form. The form displays its output in a new tab (via the JavaScript function window.open). In my earlier post I wasn't issuing the right POST request; I have since fixed that.

The code below successfully retrieves the form page's HTML with the correct request headers, and the POST response matches exactly what I see in the Chrome inspector. However, I still can't retrieve the data. After the user makes a selection, a new popup window opens, but I can't capture it. The popup has its own URL, and its contents are not part of the POST response body.

Request URL: https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx

Popup URL [the data I want to download]: https://apps.neb-one.gc.ca/CommodityStatistics/ViewReport.aspx

import requests
from bs4 import BeautifulSoup

url = 'https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx'

with requests.Session() as s:
    s.headers = {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.115 Safari/537.36",
        "Content-Type": "application/x-www-form-urlencoded",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Referer": "https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx",
        "Accept-Encoding": "gzip, deflate, br",
        "Accept-Language": "en-US,en;q=0.9"
    }

    # Initial GET: collect the visible form fields and the hidden
    # ASP.NET state (__VIEWSTATE, __EVENTVALIDATION, ...)
    response = s.get(url)
    soup = BeautifulSoup(response.content, 'html5lib')

    data = {tag['name']: tag['value']
            for tag in soup.select('input[name^=ctl00]') if tag.get('value')}
    state = {tag['name']: tag['value']
             for tag in soup.select('input[name^=__]')}

    payload = data.copy()
    payload.update(state)

    # First postback: switch the commodity system to ELEC
    payload.update({
        "ctl00$MainContent$rdoCommoditySystem": "ELEC",
        "ctl00$MainContent$lbReportName": '76',
        "ctl00$MainContent$rdoReportFormat": 'PDF',
        "ctl00$MainContent$ddlStartYear": "2008",
        "__EVENTTARGET": "ctl00$MainContent$rdoCommoditySystem$2"
    })

    print(payload['__EVENTTARGET'])
    print(payload['__VIEWSTATE'][-20:])

    response = s.post(url, data=payload, allow_redirects=True)
    soup = BeautifulSoup(response.content, 'html5lib')

    # Refresh the hidden state returned by the previous postback
    state = {tag['name']: tag['value']
             for tag in soup.select('input[name^=__]')}

    # Second postback: select report 171
    payload.pop("ctl00$MainContent$ddlStartYear")
    payload.update(state)
    payload.update({
        "__EVENTTARGET": "ctl00$MainContent$lbReportName",
        "ctl00$MainContent$lbReportName": "171",
        "ctl00$MainContent$ddlFrom": "01/12/2018 12:00:00 AM"
    })

    print(payload['__EVENTTARGET'])
    print(payload['__VIEWSTATE'][-20:])

    response = s.post(url, data=payload, allow_redirects=True)
    soup = BeautifulSoup(response.content, 'html5lib')

    state = {tag['name']: tag['value']
             for tag in soup.select('input[name^=__]')}

    # Final postback: press the View button
    payload.update(state)
    payload.update({
        "ctl00$MainContent$ddlFrom": "01/10/1990 12:00:00 AM",
        "ctl00$MainContent$rdoReportFormat": "HTML",
        "ctl00$MainContent$btnView": "View"
    })

    print(payload['__VIEWSTATE'])

    response = s.post(url, data=payload, allow_redirects=True)
    print(response.text)
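One way I can think of to tell whether the final postback worked is to look for the window.open startup script in the returned page (a heuristic of mine, not anything official; the marker string is taken from the JS quoted further down in this question):

```python
import re

def report_ready(html: str) -> bool:
    """Heuristic: the final postback succeeded if the page's startup
    script schedules the popup via window.open("ViewReport.aspx", ...)."""
    return re.search(r'window\.open\("ViewReport\.aspx"', html) is not None

# Sample startup script as it appears in the returned page source:
sample = 'window.open("ViewReport.aspx", "_blank");Sys.Application.initialize();'
print(report_ready(sample))
```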

Is there any way to retrieve the data from the popup window using requests and bs4? I noticed that requests-html can parse and render JS, but all of my attempts with it have failed.

The page source contains this JS, which I assume is what opens the popup with the data:


//<![CDATA[
window.open("ViewReport.aspx", "_blank");Sys.Application.initialize();
//]]>

But I can't access it.
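Since window.open receives the relative URL "ViewReport.aspx", the popup itself is just a plain GET resolved against the form page's address. My (untested) assumption is that the server stores the generated report in the ASP.NET session, so re-requesting that URL with the same requests.Session as in the code above (and therefore the same cookies) might return it:

```python
from urllib.parse import urljoin

form_url = "https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx"
# window.open("ViewReport.aspx", "_blank") resolves relative to the form page:
report_url = urljoin(form_url, "ViewReport.aspx")
print(report_url)

# Untested assumption: with the session cookies from the postbacks above
# (s is the requests.Session), a plain GET may return the generated report:
# report = s.get(report_url)
# print(report.text)
```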

1 answer:

Answer 0 (score: 0):

Check out this Scrapy blog: https://blog.scrapinghub.com/2016/04/20/scrapy-tips-from-the-pros-april-2016-edition

I have used this concept to scrape aspx pages in the past.
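The core idea in that blog post is that every ASP.NET postback must echo back the page's hidden state fields (__VIEWSTATE, __EVENTVALIDATION, ...). A minimal stdlib-only sketch of collecting those fields (the sample HTML and values below are made up for illustration; real __VIEWSTATE values are long opaque blobs):

```python
from html.parser import HTMLParser

class HiddenFieldCollector(HTMLParser):
    """Collects <input type="hidden"> name/value pairs from a page,
    so they can be echoed back in the next postback's payload."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden" and a.get("name"):
            self.fields[a["name"]] = a.get("value", "")

# Illustrative HTML only, not the real page:
page = (
    '<form><input type="hidden" name="__VIEWSTATE" value="abc"/>'
    '<input type="hidden" name="__EVENTVALIDATION" value="xyz"/>'
    '<input type="text" name="q" value="ignored"/></form>'
)
collector = HiddenFieldCollector()
collector.feed(page)
print(collector.fields)  # only the hidden fields are kept
```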