import mechanize
br = mechanize.Browser()
url = 'http://nseindia.com'
br.open(url)
The error is:
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 203, in open
    return self._mech_open(url, data, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 255, in _mech_open
    raise response
httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt
I have tried every idea I could think of, for example:
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
br.set_handle_equiv(False)
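For background, the 403 above is raised by mechanize itself: it downloads the site's robots.txt and refuses any URL the rules disallow. Python's standard robotparser module performs the same check; a minimal stdlib sketch (the rule set below is invented for illustration and is not NSE's real robots.txt):

```python
# The module moved in Python 3, so import both ways.
try:
    from urllib import robotparser   # Python 3
except ImportError:
    import robotparser               # Python 2

rp = robotparser.RobotFileParser()
# Feed an example rule set directly instead of fetching it over the network.
rp.parse("User-agent: *\nDisallow: /private/".splitlines())

print(rp.can_fetch("*", "http://example.com/private/page"))  # False
print(rp.can_fetch("*", "http://example.com/public"))        # True
```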
Answer 0 (score: 1)
You need to pass an Accept header:
import mechanize
br = mechanize.Browser()
br.addheaders = [
    ('User-Agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.143 Safari/537.36'),
    ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8')
]
url = 'http://nseindia.com'
br.open(url)
Then, just to prove that it is working, parse the response with BeautifulSoup and get the page title:
from bs4 import BeautifulSoup  # import missing from the original answer

soup = BeautifulSoup(br.response())
print soup.title.text
This prints:
NSE - National Stock Exchange of India Ltd.