Response page not scraped after POST_DATA - Beautiful Soup & Python

Asked: 2017-05-17 12:17:57

Tags: python html web-scraping beautifulsoup

I am trying to scrape a webpage by posting data to a form, using the code below.

import bs4 as bs
import urllib.request
import requests
import webbrowser
import urllib.parse

url_for_parse = "http://demo.testfire.net/feedback.aspx"
#PARSE THE WEBPAGE
sauce = urllib.request.urlopen(url_for_parse).read()
soup = bs.BeautifulSoup(sauce,"html.parser")

#GET FORM ATTRIBUTES
form = soup.find('form')
action_value = form.get('action')
method_value = form.get('method')
id_value = form.get('id')

#POST DATA
payload = {'txtSearch':'HELLOWORLD'}
r = requests.post(url_for_parse, payload)

#PARSING ACTION VALUE WITH URL
url2 = urllib.parse.urljoin(url_for_parse,action_value)

#READ RESPONSE
response = urllib.request.urlopen(url2)
page_source = response.read()
with open("results.html", "w") as f:
    f.write(str(page_source))

searchfile = open("results.html", "r")
for line in searchfile:
    if "HELLOWORLD" in line: 
        print ("STRING FOUND")
    else:
        print ("STRING NOT FOUND")  
searchfile.close()  

The code runs without errors. The response page is successfully scraped and stored in results.html.

However, I want to scrape the page as it is after the post data has been submitted, because every time I run the code I get the result: String Not Found. That suggests the page being scraped is the one generated before post_data is executed.

How can I modify the code so that the form is successfully submitted and the resulting source code is stored in the local file?

Alternatively, is an alternative framework recommended over Beautiful Soup for the process above?

3 answers:

Answer 0 (score: 1)

What you are currently doing is:

1) You are posting some data to a URL
2) Scraping the same URL.
3) Check for some "String"

But what you should be doing is:

1) Post data to a URL
2) Scrape the resultant page (Not the same URL) and store in the file
3) Check for some "String"

For this, you need to write r.content to the local file and search that for the string.

Modify the code as follows:

payload = {'txtSearch': 'HELLOWORLD'}
url2 = urllib.parse.urljoin(url_for_parse, action_value)
# auth is optional here; if the site needs it, pass a (username, password) tuple
r = requests.post(url2, data=payload, auth=("USERNAME", "PASSWORD"))

with open("results.html", "w") as f:
    f.write(str(r.content))

# Then continue searching for the string.

Note: you need to send the payload to url2, not to the initial URL (url_for_parse).

Answer 1 (score: 0)

The response returned from your requests.post call will be the HTML you are after. You can access it via:

r.content

However, from testing this, it says I am not authenticated, so I am assuming you are already authenticated?

I would also suggest using requests throughout, rather than urllib for the GET and requests for the POST.

Answer 2 (score: 0)

It is probably a good idea to keep a session across your requests.

http://docs.python-requests.org/en/master/user/advanced/#session-objects

import requests

proxies = {
    "http": "",
    "https": "",
}

headers = {
        'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'
}

url = 'http://demo.testfire.net/comment.aspx'  # the form's action URL (see below)
data = {'item': 'content'}
# not that you need basic auth, but it's simple to toss into requests
auth = requests.auth.HTTPBasicAuth('fake@example.com', 'not_a_real_password')
s = requests.Session()
s.headers.update(headers)
s.proxies.update(proxies)
response = s.post(url=url, data=data, auth=auth)

The key bit of this is really which URL you are actually calling and then waiting on. The form on the page is:

<form name="cmt" method="post" action="comment.aspx">

which is just a POST to http://demo.testfire.net/comment.aspx.