I'm trying to download a CSV file from this website, but I keep getting an HTML file instead, whether I use this code (which worked until a few weeks ago) or wget.
import urllib2

url = "http://.....aspx"
file_name = "%s.csv" % url.split('/')[3]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)

file_size_dl = 0
block_sz = 8192
while True:
    buffer = u.read(block_sz)
    if not buffer:
        break
    file_size_dl += len(buffer)
    f.write(buffer)
    status = r"%10d  [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
    status = status + chr(8) * (len(status) + 1)
    print status,

f.close()
How can I get this file with Python again?

Thanks
Answer 0: (score: 3)
Solved by using the Requests library instead of urllib2:
import requests

url = "http://www.....aspx?download=1"
file_name = "Data.csv"
u = requests.get(url)
file_size = int(u.headers['content-length'])
print "Downloading: %s Bytes: %s" % (file_name, file_size)

# The with-block closes the file automatically, so no explicit
# flush()/close() calls are needed afterwards.
with open(file_name, 'wb') as f:
    for chunk in u.iter_content(chunk_size=1024):
        if chunk:  # filter out keep-alive chunks
            f.write(chunk)
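Since the original symptom was receiving an HTML page instead of the CSV (often a login or error page returned by the server), it can help to sniff the payload before saving it. A minimal sketch, assuming Python 3 and a hypothetical helper `looks_like_html` (not part of Requests):

```python
def looks_like_html(first_chunk):
    """Heuristic: does this payload start like an HTML page rather than CSV data?"""
    head = first_chunk.lstrip().lower()
    return head.startswith(b"<!doctype") or head.startswith(b"<html")

# A CSV header row is not mistaken for HTML:
print(looks_like_html(b"Date,Open,High,Low,Close\n"))    # False
# A typical error/login page is caught:
print(looks_like_html(b"  <!DOCTYPE html><html>..."))    # True
```

With Requests, the same idea could be applied to the first chunk yielded by `u.iter_content(...)`, or more cheaply by checking `u.headers.get('content-type')` for `text/html` before writing anything to disk.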