So I'm trying to make a Python script that downloads webcomics and puts them in a folder on my desktop. I've found a few similar programs on here, but nothing quite like what I need. The one I found most similar is here (http://bytes.com/topic/python/answers/850927-problem-using-urllib-download-images). I tried using this code:
>>> import urllib
>>> image = urllib.URLopener()
>>> image.retrieve("http://www.gunnerkrigg.com//comics/00000001.jpg","00000001.jpg")
('00000001.jpg', <httplib.HTTPMessage instance at 0x1457a80>)
I then searched my computer for the file "00000001.jpg", but all I found was the cached copy of it. I'm not even sure it saved the file to my computer at all. Once I understand how to get the file downloaded, I think I know how to handle the rest. Essentially just use a for loop, split the string at the '00000000'.'jpg', and increment the '00000000' up to the largest number, which I would have to determine somehow. Any recommendations on the best way to do this, or on how to download the file correctly?

Thanks!

Edit 6/15/10

Here is the finished script; it saves the files to whatever directory you choose. For some odd reason the files weren't downloading, and then they just did. Any suggestions on how to clean it up would be much appreciated. I'm currently working out how to find out how many comics exist on the site, so I can get just the latest one rather than having the program quit after a certain number of exceptions are raised.
import urllib
import os

comicCounter = len(os.listdir('/file')) + 1  # reads the number of files in the folder to start downloading at the next comic
errorCount = 0

def download_comic(url, comicName):
    """
    download a comic in the form of

    url = http://www.example.com
    comicName = '00000000.jpg'
    """
    image = urllib.URLopener()
    image.retrieve(url, comicName)  # download comicName at URL

while comicCounter <= 1000:  # not the most elegant solution
    os.chdir('/file')  # set where files download to
    try:
        if comicCounter < 10:  # needed to break into 10^n segments because comic names are a set of zeros followed by a number
            comicNumber = str('0000000' + str(comicCounter))  # string containing the eight digit comic number
            comicName = str(comicNumber + ".jpg")  # string containing the file name
            url = str("http://www.gunnerkrigg.com//comics/" + comicName)  # creates the URL for the comic
            comicCounter += 1  # increments the comic counter to go to the next comic, must be before the download in case the download raises an exception
            download_comic(url, comicName)  # uses the function defined above to download the comic
            print url
        if 10 <= comicCounter < 100:
            comicNumber = str('000000' + str(comicCounter))
            comicName = str(comicNumber + ".jpg")
            url = str("http://www.gunnerkrigg.com//comics/" + comicName)
            comicCounter += 1
            download_comic(url, comicName)
            print url
        if 100 <= comicCounter < 1000:
            comicNumber = str('00000' + str(comicCounter))
            comicName = str(comicNumber + ".jpg")
            url = str("http://www.gunnerkrigg.com//comics/" + comicName)
            comicCounter += 1
            download_comic(url, comicName)
            print url
        else:  # quit the program if any number outside this range shows up
            quit
    except IOError:  # urllib raises an IOError for a 404 error, when the comic doesn't exist
        errorCount += 1  # add one to the error count
        if errorCount > 3:  # if more than three errors occur during downloading, quit the program
            break
        else:
            print str("comic" + ' ' + str(comicCounter) + ' ' + "does not exist")  # otherwise say that the certain comic number doesn't exist

print "all comics are up to date"  # prints if all comics are downloaded
Answer 0 (score: 219)
import urllib
urllib.urlretrieve("http://www.gunnerkrigg.com//comics/00000001.jpg", "00000001.jpg")
Answer 1 (score: 77)
import urllib
f = open('00000001.jpg','wb')
f.write(urllib.urlopen('http://www.gunnerkrigg.com//comics/00000001.jpg').read())
f.close()
Answer 2 (score: 53)
Just for the record, using the requests library:
import requests
f = open('00000001.jpg','wb')
f.write(requests.get('http://www.gunnerkrigg.com//comics/00000001.jpg').content)
f.close()
Though it should check for errors from requests.get().
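For example, a minimal sketch of that check: raise_for_status() is part of the requests API and raises requests.HTTPError on a 4xx/5xx response.

import requests

response = requests.get('http://www.gunnerkrigg.com//comics/00000001.jpg')
response.raise_for_status()  # raises requests.HTTPError if the download failed
with open('00000001.jpg', 'wb') as f:
    f.write(response.content)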
Answer 3 (score: 22)
For Python 3, you will need to import urllib.request:
import urllib.request
urllib.request.urlretrieve(url, filename)
For more info, check out the docs for urllib.request.
Answer 4 (score: 14)
A Python 3 version of @DiGMi's answer:
from urllib import request
f = open('00000001.jpg', 'wb')
f.write(request.urlopen("http://www.gunnerkrigg.com/comics/00000001.jpg").read())
f.close()
Answer 5 (score: 10)
I have found this answer and edited it to be more reliable:
import os
import urllib  # Python 2 APIs: urllib.urlopen() and the file() builtin below

def download_photo(self, img_url, filename):
    try:
        image_on_web = urllib.urlopen(img_url)
        if image_on_web.headers.maintype == 'image':
            buf = image_on_web.read()
            path = os.getcwd() + DOWNLOADED_IMAGE_PATH  # DOWNLOADED_IMAGE_PATH is assumed to be defined elsewhere
            file_path = "%s%s" % (path, filename)
            downloaded_image = file(file_path, "wb")
            downloaded_image.write(buf)
            downloaded_image.close()
            image_on_web.close()
        else:
            return False
    except:
        return False
    return True
This way you never get any other resources or exceptions while downloading.
Answer 6 (score: 7)
It's easiest to just use .read() to read the partial or entire response, then write it into a file you've opened in a known-good location.
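For instance, a minimal sketch of that approach with the Python 3 standard library (the URL is the one from the question):

import urllib.request

response = urllib.request.urlopen('http://www.gunnerkrigg.com//comics/00000001.jpg')
data = response.read()  # read the entire response body
with open('00000001.jpg', 'wb') as out:  # a known location: the current directory
    out.write(data)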
Answer 7 (score: 6)
Maybe you need a 'User-Agent':
import urllib2
opener = urllib2.build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.137 Safari/537.36')]
response = opener.open('http://google.com')
htmlData = response.read()
f = open('file.txt','w')
f.write(htmlData)
f.close()
Answer 8 (score: 5)
If you know that the files are located in the same directory dir of the website site and have the following format: filename_01.jpg, ..., filename_10.jpg, then download all of them:
import requests

for x in range(1, 11):  # 1 through 10 inclusive; the original range(1, 10) skipped filename_10.jpg
    str1 = 'filename_%2.2d.jpg' % (x)
    str2 = 'http://site/dir/filename_%2.2d.jpg' % (x)

    f = open(str1, 'wb')
    f.write(requests.get(str2).content)
    f.close()
Answer 9 (score: 3)
Aside from suggesting that you read the docs for retrieve() carefully (http://docs.python.org/library/urllib.html#urllib.URLopener.retrieve), I would suggest actually calling read() on the content of the response, then saving it into a file of your choosing rather than leaving it in the temporary file that retrieve() creates.
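Concretely, a hedged sketch of that suggestion using the Python 2 urllib.URLopener API the linked docs describe:

import urllib

opener = urllib.URLopener()
response = opener.open('http://www.gunnerkrigg.com//comics/00000001.jpg')
with open('00000001.jpg', 'wb') as f:  # a file of your choosing, not retrieve()'s temp file
    f.write(response.read())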
Answer 10 (score: 3)
None of the code above allows you to preserve the original image name, which is sometimes required. This will help in saving the images to your local drive while preserving the original image name:
IMAGE = URL.rsplit('/',1)[1]
urllib.urlretrieve(URL, IMAGE)
Try this for more details.
Answer 11 (score: 3)
This worked for me using Python 3. It gets a list of URLs from a CSV file and starts downloading them into a folder. In case the content or image does not exist, it takes that exception and continues working its magic.
import urllib.request
import csv
import os

errorCount = 0

file_list = "/Users/$USER/Desktop/YOUR-FILE-TO-DOWNLOAD-IMAGES/image_{0}.jpg"

# CSV file must be comma-separated
# urls.csv is read from your current working directory; make sure you cd into it or add the corresponding path
with open('urls.csv') as images:
    images = csv.reader(images)
    img_count = 1
    print("Please wait... it will take some time")
    for image in images:
        try:
            urllib.request.urlretrieve(image[0],
                                       file_list.format(img_count))
            img_count += 1
        except IOError:
            errorCount += 1
            # Stop in case you reach 100 errors downloading images
            if errorCount > 100:
                break
            else:
                print("File does not exist")

print("Done!")
Answer 12 (score: 2)
According to urllib.request.urlretrieve — Python 3.9.2 documentation, the function was ported from the Python 2 module urllib (as opposed to urllib2), and it may become deprecated at some point in the future.

Because of this, it may be better to use requests.get(url, params=None, **kwargs). Here is an MWE:
import requests

url = 'http://example.com/example.jpg'
filename = url.split('/')[-1]  # the original snippet left filename undefined; derive it from the URL

response = requests.get(url)
with open(filename, "wb") as f:
    f.write(response.content)
Reference: Download Google's WebP Images via Take Screenshots with Selenium WebDriver.
Answer 13 (score: 2)
A simpler solution may be (Python 3):
import urllib.request
import os

os.chdir("D:\\comic")  # your path
i = 1
s = "00000000"
while i < 1000:
    try:
        urllib.request.urlretrieve("http://www.gunnerkrigg.com//comics/" + s[:8 - len(str(i))] + str(i) + ".jpg", str(i) + ".jpg")
    except:
        print("not possible" + str(i))
    i += 1
Answer 14 (score: 1)
Using urllib, you can get this done instantly.
import urllib.request
opener=urllib.request.build_opener()
opener.addheaders=[('User-Agent','Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1941.0 Safari/537.36')]
urllib.request.install_opener(opener)
urllib.request.urlretrieve(URL, "images/0.jpg")  # URL is your image's address; the images/ directory must already exist
Answer 15 (score: 1)
How about this:
import os
import urllib.request
import urllib.error  # the code uses the Python 3 urllib.request/urllib.error APIs, so import the submodules

def from_url(url, filename=None):
    '''Store the url content to filename'''
    if not filename:
        filename = os.path.basename(os.path.realpath(url))

    req = urllib.request.Request(url)
    try:
        response = urllib.request.urlopen(req)
    except urllib.error.URLError as e:
        if hasattr(e, 'reason'):
            print('Fail in reaching the server -> ', e.reason)
            return False
        elif hasattr(e, 'code'):
            print('The server couldn\'t fulfill the request -> ', e.code)
            return False
    else:
        with open(filename, 'wb') as fo:
            fo.write(response.read())
        print('Url saved as %s' % filename)
        return True

##

def main():
    test_url = 'http://cdn.sstatic.net/stackoverflow/img/favicon.ico'
    from_url(test_url)

if __name__ == '__main__':
    main()
Answer 16 (score: 0)
If you need proxy support, you can do this:
if needProxy == False:
    returnCode, urlReturnResponse = urllib.urlretrieve(myUrl, fullJpegPathAndName)
else:
    proxy_support = urllib2.ProxyHandler({"https": myHttpProxyAddress})
    opener = urllib2.build_opener(proxy_support)
    urllib2.install_opener(opener)
    urlReader = urllib2.urlopen(myUrl).read()
    with open(fullJpegPathAndName, "wb") as f:  # "wb", since JPEG data is binary (the original used "w")
        f.write(urlReader)
Answer 17 (score: 0)
Another way to do it is via the fastai library. This worked like a charm for me. I was facing an SSL: CERTIFICATE_VERIFY_FAILED error using urlretrieve, so I tried this instead.
import fastai.core  # assumption: fastai v1, where download_url lives in fastai.core

url = 'https://www.linkdoesntexist.com/lennon.jpg'
fastai.core.download_url(url, 'image1.jpg', show_progress=False)
Answer 18 (score: 0)
Using requests:
import requests
import shutil
import os

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'
}

currentDir = os.getcwd()
path = os.path.join(currentDir, 'Images')  # saving images to the Images folder

def ImageDl(url):
    attempts = 0
    while attempts < 5:  # retry 5 times
        try:
            filename = url.split('/')[-1]
            r = requests.get(url, headers=headers, stream=True, timeout=5)
            if r.status_code == 200:
                with open(os.path.join(path, filename), 'wb') as f:
                    r.raw.decode_content = True
                    shutil.copyfileobj(r.raw, f)
                print(filename)
                break
            attempts += 1  # count non-200 responses toward the retry limit (the original only counted exceptions, looping forever otherwise)
        except Exception as e:
            attempts += 1
            print(e)

if __name__ == '__main__':
    url = 'http://www.gunnerkrigg.com//comics/00000001.jpg'  # the original left url undefined; using the question's URL as an example
    ImageDl(url)