I would like to download a series of PDF files from my intranet. I can view the files in my web browser without any problem, but when I try to automate the retrieval with Python I run into trouble. Going through the proxy set up at my office, I can download files from the internet easily with the following:
import urllib2

url = 'http://www.sample.com/fileiwanttodownload.pdf'
user = 'username'
pswd = 'password'
proxy_ip = '12.345.56.78:80'
proxy_url = 'http://' + user + ':' + pswd + '@' + proxy_ip
proxy_support = urllib2.ProxyHandler({"http": proxy_url})
opener = urllib2.build_opener(proxy_support, urllib2.HTTPHandler)
urllib2.install_opener(opener)
file_name = url.split('/')[-1]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
f.write(u.read())  # write the downloaded bytes before closing
f.close()
But for whatever reason, it does not work if the URL points to something on my intranet. The following error is returned:
Traceback (most recent call last):
  File "<ipython-input-13-a055d9eaf05e>", line 1, in <module>
    runfile('C:/softwaredev/python/pdfwrite.py', wdir='C:/softwaredev/python')
  File "C:\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 585, in runfile
    execfile(filename, namespace)
  File "C:/softwaredev/python/pdfwrite.py", line 26, in <module>
    u = urllib2.urlopen(url)
  File "C:\Anaconda\lib\urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "C:\Anaconda\lib\urllib2.py", line 410, in open
    response = meth(req, response)
  File "C:\Anaconda\lib\urllib2.py", line 523, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Anaconda\lib\urllib2.py", line 442, in error
    result = self._call_chain(*args)
  File "C:\Anaconda\lib\urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "C:\Anaconda\lib\urllib2.py", line 629, in http_error_302
    return self.parent.open(new, timeout=req.timeout)
  File "C:\Anaconda\lib\urllib2.py", line 410, in open
    response = meth(req, response)
  File "C:\Anaconda\lib\urllib2.py", line 523, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Anaconda\lib\urllib2.py", line 448, in error
    return self._call_chain(*args)
  File "C:\Anaconda\lib\urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "C:\Anaconda\lib\urllib2.py", line 531, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: Service Unavailable
Using requests in the code below, I can successfully download files from the internet, but when I try to pull a PDF from my office intranet I just get a connection error sent back as HTML. Running the following code:
import requests

# requests needs an explicit scheme on the URL
url = 'http://www.intranet.sample.com/?layout=attachment&cfapp=26&attachmentid=57142'
proxies = {
    "http": "http://12.345.67.89:80",
    "https": "http://12.345.67.89:80",
}
local_filename = 'test.pdf'
r = requests.get(url, proxies=proxies, stream=True)
with open(local_filename, 'wb') as f:
    for chunk in r.iter_content(chunk_size=1024):
        print chunk
        if chunk:
            f.write(chunk)
            f.flush()
The HTML that comes back:
Network Error (tcp_error)
A communication error occurred: "No route to host"
The Web Server may be down, too busy, or experiencing other problems preventing it from responding to requests. You may wish to try again at a later time.
For assistance, contact your network support team.
Could it be that there is some network security setting preventing automated requests from outside the web browser environment?
Answer 0 (score: 1)
Installing openers in urllib2 does not affect requests. You need to use requests' own support for proxies. It should be enough to pass the proxies argument to get, or you can set the HTTP_PROXY and HTTPS_PROXY environment variables. See http://docs.python-requests.org/en/latest/user/advanced/#proxies
import requests

proxies = {
    "http": "http://10.10.1.10:3128",
    "https": "http://10.10.1.10:1080",
}
requests.get("http://example.org", proxies=proxies)
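As an alternative to passing a proxies dict on every call, the environment variables mentioned above can be set once in the script; requests consults them whenever no explicit proxies mapping is given. A minimal sketch (the proxy addresses are placeholders, not real proxies from the question):

```python
import os

# requests (like urllib) reads these when no proxies= argument is passed;
# set them before making any request. Addresses below are placeholders.
os.environ['HTTP_PROXY'] = 'http://10.10.1.10:3128'
os.environ['HTTPS_PROXY'] = 'http://10.10.1.10:1080'

# any requests.get(...) made after this point will route through the proxy
```

This is equivalent to passing proxies= explicitly, but keeps the proxy configuration out of each individual call.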
Answer 1 (score: 0)
Have you tried downloading the file from the intranet without going through the proxy?
You can try something like this in python2:
from urllib2 import urlopen

url = 'http://intranet/myfile.pdf'
local_filename = 'myfile.pdf'  # the original snippet did not define this
with open(local_filename, 'wb') as f:
    f.write(urlopen(url).read())
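The "No route to host" message in the question came from the proxy itself, which suggests the proxy cannot reach the intranet host. If that is the cause, one option is to exclude the intranet host from proxying via the NO_PROXY environment variable, which requests also honors. A sketch under that assumption (the hostname is taken from the question's example URL):

```python
import os

# The tcp_error page was generated by the proxy, so tell requests to
# connect to the intranet host directly instead of via the proxy.
# Hostname below comes from the question's example URL.
os.environ['NO_PROXY'] = 'www.intranet.sample.com'

# a requests.get() to that host will now bypass the office proxy,
# while requests to internet hosts still use HTTP_PROXY / HTTPS_PROXY
```

Whether this works depends on your network allowing direct connections from your machine to the intranet server, which is what the browser is presumably doing already.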