I am trying to use the search form at rlsnet.ru. Here is the form definition I extracted from the page source:
<form id="site_search_form" action="/search_result.htm" method="get">
<input id="simplesearch_text_input" class="search__field" type="text" name="word" value="" autocomplete="off">
<input type="hidden" name="path" value="/" id="path">
<input type="hidden" name="enter_clicked" value="1">
<input id="letters_id" type="hidden" name="letters" value="">
<input type="submit" class="g-btn search__btn" value="Найти" id="simplesearch_button">
<div class="sf_suggestion">
<ul style="display: none; z-index:1000; opacity:0.85;">
</ul>
</div>
<div id="contentsf">
</div>
</form>
Here is the code I use to send the search request:
import requests
from urllib.parse import urlencode
root = "http://www.rlsnet.ru/search_result.htm?"
response = requests.get(root + urlencode({"word": "Церебролизин".encode('cp1251')}))
Every time I run this, the response status is 403. When I enter the same request URL in Safari/Chrome/Opera (i.e. http://www.rlsnet.ru/search_result.htm?word=%D6%E5%F0%E5%E1%F0%EE%EB%E8%E7%E8%ED), it works fine and returns the expected page. What am I doing wrong? Googling the problem only turned up this question: why url works in browser but not using requests get method, which was not much help.
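As a sanity check on the encoding step, urlencode over the cp1251 bytes does produce exactly the query string the browser sends, so the percent-encoding itself is not the problem:

```python
from urllib.parse import urlencode

# Encode the search term as cp1251 bytes, exactly as in the snippet above,
# and compare with the query string seen in the browser's address bar.
query = urlencode({"word": "Церебролизин".encode("cp1251")})
print(query)  # word=%D6%E5%F0%E5%E1%F0%EE%EB%E8%E7%E8%ED
```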
Answer 0 (score: 11)
That is because the default User-Agent of requests is python-requests/2.13.0, and in your case the site does not like traffic from "non-browser" clients, so it tries to block such requests.
>>> import requests
>>> session = requests.Session()
>>> session.headers
{'Connection': 'keep-alive', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'User-Agent': 'python-requests/2.13.0'}
All you need to do is make the request look like it is coming from a browser, so just pass an extra headers argument:
import requests
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36'} # This is chrome, you can set whatever browser you like
response = requests.get('http://www.rlsnet.ru/search_result.htm?word=%D6%E5%F0%E5%E1%F0%EE%EB%E8%E7%E8%ED', headers=headers)
print(response.status_code)
print(response.url)
200
http://www.rlsnet.ru/search_result.htm?word=%D6%E5%F0%E5%E1%F0%EE%EB%E8%E7%E8%ED
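Putting the two pieces together, you can also let requests build the query string from the cp1251-encoded bytes itself instead of concatenating URLs by hand. A minimal sketch (the User-Agent string is just an example; any common browser UA should do):

```python
import requests

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 "
                         "(KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36"}

# Build the request; requests percent-encodes the cp1251 bytes for us.
req = requests.Request(
    "GET",
    "http://www.rlsnet.ru/search_result.htm",
    params={"word": "Церебролизин".encode("cp1251")},
    headers=headers,
)
prepared = req.prepare()
print(prepared.url)
# http://www.rlsnet.ru/search_result.htm?word=%D6%E5%F0%E5%E1%F0%EE%EB%E8%E7%E8%ED

# To actually send it:
# with requests.Session() as session:
#     response = session.send(prepared)
#     print(response.status_code)
```

Preparing the request first also makes it easy to inspect the final URL and headers before anything goes over the wire.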