urllib.request.urlopen(url) with authentication

Date: 2017-05-29 10:07:38

Tags: python python-3.x url beautifulsoup request

I have been playing with Beautiful Soup and parsing web pages for a few days now, and there are a couple of lines of code that have been my savior in every script I have written. The lines are:

sauce = urllib.request.urlopen(url).read()
soup = bs.BeautifulSoup(sauce, "html.parser")

But...

I would like to do the same thing with a URL that requires authentication, the way requests handles it:

r = requests.get('some_url', auth=('my_username', 'my_password'))

I am not able to open and read a web page that requires authentication. How can I achieve something like this:

sauce = urllib.request.urlopen(url).read()
soup = bs.BeautifulSoup(sauce, "html.parser")

3 Answers:

Answer 0 (score: 12):

You are using HTTP Basic Authentication:

# Python 2 (urllib2): build the Basic auth header by hand
import urllib2, base64

request = urllib2.Request(url)
# base64-encode "username:password" and attach it as the Authorization header
base64string = base64.b64encode('%s:%s' % (username, password))
request.add_header("Authorization", "Basic %s" % base64string)
result = urllib2.urlopen(request)

So you base64-encode the username and password and send them as the Authorization header.
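Since the question is tagged python-3.x, where urllib2 no longer exists and base64.b64encode expects bytes, a minimal sketch of the same idea with urllib.request could look like this (url, username and password are placeholders carried over from the code above, not real values):

import base64
import urllib.request

# url, username and password are assumed to be defined elsewhere
credentials = '%s:%s' % (username, password)
encoded = base64.b64encode(credentials.encode('utf-8')).decode('ascii')
request = urllib.request.Request(url)
request.add_header('Authorization', 'Basic %s' % encoded)
result = urllib.request.urlopen(request)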

Answer 1 (score: 10):

Take a look at HOWTO Fetch Internet Resources Using The urllib Package in the official documentation:

import urllib.request

# create a password manager
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()

# Add the username and password.
# If we knew the realm, we could use it instead of None.
top_level_url = "http://example.com/foo/"
password_mgr.add_password(None, top_level_url, username, password)

handler = urllib.request.HTTPBasicAuthHandler(password_mgr)

# create "opener" (OpenerDirector instance)
opener = urllib.request.build_opener(handler)

# use the opener to fetch a URL
opener.open(a_url)

# Install the opener.
# Now all calls to urllib.request.urlopen use our opener.
urllib.request.install_opener(opener)
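
With the opener installed, the urlopen/BeautifulSoup lines from the question work unchanged. A minimal sketch tying it together, assuming bs4 is installed and top_level_url points at the page behind the authentication:

import bs4 as bs

sauce = urllib.request.urlopen(top_level_url).read()
soup = bs.BeautifulSoup(sauce, "html.parser")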

Answer 2 (score: -1):

Using urllib3:

import urllib3

http = urllib3.PoolManager()
# make_headers builds the Basic auth Authorization header
myHeaders = urllib3.util.make_headers(basic_auth='my_username:my_password')
r = http.request('GET', 'http://example.org', headers=myHeaders)
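
The response body is then available as r.data on the returned object, so it can be handed to BeautifulSoup just like the urlopen result; a short sketch under that assumption:

import bs4 as bs

soup = bs.BeautifulSoup(r.data, "html.parser")
print(soup.title)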