I am trying to check a URL's status with urllib.request.urlopen. In some cases it raises urllib.error.URLError: HTTP Error 403: Forbidden, even though I can open the same URL successfully in a browser.
Is there a way to work around this with urllib, or would it be better to use some other library?
import socket
import ssl
import urllib.error
import urllib.request

def urllib_status(url):
    REQUEST_TIMEOUT = 10
    if 'http' not in url:
        url = 'http://' + url
    try:
        response = urllib.request.urlopen(url, timeout=REQUEST_TIMEOUT)
        return response.status
    except urllib.error.URLError as e:
        print('url:', url)
        print('urllib.error.URLError:', e)
        return -1
    except ssl.SSLError as e:
        print('url:', url)
        print('ssl.SSLError:', e)
        return -1
    except socket.error as e:
        print('url:', url)
        print('socket.error:', e)
        return -1
Answer 0 (score: 1)
The problem is probably that the website does not accept non-browser requests. You can work around this by overriding the User-Agent header of the request (the default is Python-urllib/3.X).
From the Python docs:
import urllib.request
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
opener.open('http://www.example.com/')
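If you would rather keep the structure of urllib_status from the question, a minimal sketch along the same lines (the function name and the simplified error handling here are illustrative, not from the answer) passes the User-Agent through urllib.request.Request:

import socket
import ssl
import urllib.error
import urllib.request

REQUEST_TIMEOUT = 10

def urllib_status_with_ua(url):
    # Hypothetical variant of the question's urllib_status: same flow,
    # but the request carries a browser-like User-Agent header instead
    # of the default Python-urllib/3.X.
    if 'http' not in url:
        url = 'http://' + url
    request = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    try:
        response = urllib.request.urlopen(request, timeout=REQUEST_TIMEOUT)
        return response.status
    except (urllib.error.URLError, ssl.SSLError, socket.error) as e:
        print('url:', url)
        print('error:', e)
        return -1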
Or, if you are using requests (the de facto standard HTTP library among Python users):
import requests
requests.get('http://www.example.com/', headers={'User-agent': 'Mozilla/5.0'})
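As a side note, if you are checking many URLs, a requests.Session lets you set the header once and reuses connections; a minimal sketch:

import requests

# Set the browser-like User-Agent once; every request made through
# this session will send it automatically.
session = requests.Session()
session.headers.update({'User-agent': 'Mozilla/5.0'})
session.get('http://www.example.com/')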
Answer 1 (score: 0)
Using requests:
import requests

def url_status(url):
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.0; WOW64; rv:24.0)'
               ' Gecko/20100101 Firefox/24.0'}
    REQUEST_TIMEOUT = 10
    if 'http' not in url:
        url = 'http://' + url
    try:
        response = requests.get(url, headers=headers, timeout=REQUEST_TIMEOUT)
        if response.status_code != 200:
            print(url)
            print('status', response.status_code)
        return response.status_code
    except Exception as e:
        print(url)
        print('Error', e)
        return -1
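A possible usage, with www.example.com standing in as an illustrative URL:

status = url_status('www.example.com')
print(status)  # HTTP status code, e.g. 200, or -1 if the request failed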