Python web scraping form data with requests not working

Asked: 2019-10-13 16:51:56

Tags: python post web-scraping python-requests

I am trying to POST input data to a form using requests.Session, and it returns a 500 status. I expect to see the retrieved search results instead.

With Bertrand Martel's help, I was able to solve an earlier login problem involving the __RequestVerificationToken and cookies. The next step in my process is to GET the Search page, which succeeds. Now, however, the POST fails when I submit data for the date fields of the form that makes up the search criteria. When I fill in the form manually and press Submit, it works. Everything seems quite straightforward to me, so I'm not sure why it doesn't work. Is it still a cookie problem? Any help would be appreciated.

Here is my code:

import requests
from bs4 import BeautifulSoup

EMAIL = 'myemail@gmail.com'
PASSWORD = 'somepwd'
LOGIN_URL = 'https://www.idocmarket.com/Security/LogOn'
SEARCH_URL = 'https://www.idocmarket.com/RIOCO/Document/Search'


s = requests.Session()
s.get(LOGIN_URL)

result = s.post(LOGIN_URL, data = {
    "Login.Username": EMAIL,
    "Login.Password": PASSWORD
})
soup = BeautifulSoup(result.text, "html.parser")
# Report successful login
print("Login succeeded: ", result.ok)
print("Status code:", result.status_code)

result = s.get(SEARCH_URL)
auth_token  = soup.find("input", {'name': '__RequestVerificationToken'}).get('value')
print('auth token:', auth_token )
print("Get Search succeeded: ", result.ok)
print("Get Search status code:", result.status_code)
result = s.post(SEARCH_URL, data = {
    "__RequestVerificationToken": auth_token,
    "StartRecordDate": "03/01/2019",
    "EndRecordDate": "03/31/2019",
    "StartDocNumber": "",
    "EndDocNumber": "",
    "Book": "",
    "Page": "",
    "Instrument": "",
    "InstrumentGroup": "",
    "PartyType": "Either",
    "PartyMatchType": "Contains",
    "PartyName": "",
    "Subdivision": "",
    "StartLot": "",
    "EndLot": "",
    "Block": "",
    "Section":"",
    "Township": "",
    "Range": "",
    "Legal": "",
    "CountyKey": "RIOCO"
})
print("post Dates succeeded: ", result.ok)
print("post Dates Status code:", result.status_code)
print(result.text)
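One way to diagnose a 500 on a form POST is to list every input the form actually contains, since ASP.NET anti-forgery validation expects the token (and usually all the other fields) from the same page that renders the form. A minimal sketch using a static HTML snippet in place of the real Search page (the snippet and its field names are illustrative, not the site's actual markup):

```python
from bs4 import BeautifulSoup

# Illustrative snippet standing in for the Search page HTML
html = """
<form action="/RIOCO/Document/Search" method="post">
  <input name="__RequestVerificationToken" type="hidden" value="abc123">
  <input name="StartRecordDate" type="text">
  <input name="EndRecordDate" type="text">
</form>
"""

soup = BeautifulSoup(html, "html.parser")
form = soup.find("form")

# Every named input the POST should include
fields = [i["name"] for i in form.find_all("input") if i.has_attr("name")]
print(fields)
```

Comparing this list against the dict you actually POST quickly shows any fields the server expects but never receives.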

1 Answer:

Answer 0 (score: 1)

This time it seems the POST needs the XSRF token along with all of the form's existing parameters. A simple solution is to collect every input value from the page and pass them all in the request:

import requests
from bs4 import BeautifulSoup

LOGIN_URL = 'https://www.idocmarket.com/Security/LogOn'
SEARCH_URL = 'https://www.idocmarket.com/RIOCO/Document/Search'
EMAIL = 'myemail@gmail.com'
PASSWORD = 'somepwd'

s = requests.Session()
s.get(LOGIN_URL)

r = s.post(LOGIN_URL, data = {
    "Login.Username": EMAIL,
    "Login.Password": PASSWORD
})

if (r.status_code == 200):
    r = s.get(SEARCH_URL)
    soup = BeautifulSoup(r.text, "html.parser")
    payload = {}
    for input_item in soup.select("input"):
        if input_item.has_attr('name'):
            payload[input_item["name"]] = input_item["value"]
    payload["StartRecordDate"] = '09/01/2019'
    payload["EndRecordDate"] = '09/30/2019'
    r = s.post(SEARCH_URL, data = payload)
    soup = BeautifulSoup(r.text, "html.parser")
    print(soup)
else:
    print("authentication failure")

The payload can also be written with a list comprehension:

temp_pl = [
    (t['name'], t['value']) 
    for t in soup.select("input")
    if t.has_attr('name')
]
payload = dict(temp_pl)
payload["StartRecordDate"] = '09/01/2019'
payload["EndRecordDate"] = '09/30/2019'
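One caveat with either version: `t["value"]` raises a `KeyError` for inputs the server renders without a `value` attribute (empty text fields often lack one). A slightly more defensive variant uses BeautifulSoup's `Tag.get` with a default, shown here against a small illustrative snippet:

```python
from bs4 import BeautifulSoup

# Illustrative inputs: one with a value, one without
html = """
<input name="__RequestVerificationToken" value="tok">
<input name="PartyName">
"""

soup = BeautifulSoup(html, "html.parser")
payload = {
    t["name"]: t.get("value", "")  # empty string when value is absent
    for t in soup.select("input")
    if t.has_attr("name")
}
print(payload)
```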