I am trying to download images with Python 3 from URLs stored in a .txt file, and I get an error when I try this on certain websites. This is the error I get:
File "C:/Scripts/ImageScraper/ImageScraper.py", line 14, in <module>
dl()
File "C:/Scripts/ImageScraper/ImageScraper.py", line 10, in dl
urlretrieve(URL, IMAGE)
File "C:\Python34\lib\urllib\request.py", line 186, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
File "C:\Python34\lib\urllib\request.py", line 161, in urlopen
return opener.open(url, data, timeout)
File "C:\Python34\lib\urllib\request.py", line 469, in open
response = meth(req, response)
File "C:\Python34\lib\urllib\request.py", line 579, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python34\lib\urllib\request.py", line 507, in error
return self._call_chain(*args)
File "C:\Python34\lib\urllib\request.py", line 441, in _call_chain
result = func(*args)
File "C:\Python34\lib\urllib\request.py", line 587, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
Using this code:
from urllib.request import urlretrieve

def dl():
    with open('links.txt', 'r') as input_file:
        for line in input_file:
            URL = line.strip()  # drop the trailing newline so the URL is valid
            IMAGE = URL.rsplit('/', 1)[1]  # use everything after the last '/' as the filename
            urlretrieve(URL, IMAGE)

if __name__ == '__main__':
    dl()
I assume this is because they don't allow 'bots' to access their site, but through some research I found there is a way around it, at least when using urlopen. However, I haven't been able to apply that workaround to my code, since I am using urlretrieve. Is it possible to make this work?
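For reference, the urlopen workaround I came across looks roughly like this (a sketch; the browser-style User-Agent string is just an example value, and URL/IMAGE are the same variables as in my code above):

from urllib.request import Request, urlopen

# Send a browser-style User-Agent header so the server does not
# reject the request as coming from a bot.
req = Request(URL, headers={'User-Agent': 'Mozilla/5.0'})
with urlopen(req) as response, open(IMAGE, 'wb') as out_file:
    out_file.write(response.read())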
Answer 0 (score: 1)
I believe this is a genuine HTTP error: 403 means access to that URL is forbidden. You may want to print the URL before requesting it and try opening it in a browser; you will likely get the same 403 Forbidden error there. Read more about HTTP status codes, in particular 403 Forbidden.
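If the browser can load the URL but your script cannot, the server is probably rejecting urllib's default Python User-Agent. One possible workaround (a sketch, assuming the server only checks the User-Agent header) is to install a global opener with a browser-style User-Agent, so your existing urlretrieve calls keep working unchanged:

import urllib.request

# Install a module-wide opener whose requests carry a browser-style
# User-Agent; urlretrieve() uses this opener internally.
opener = urllib.request.build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0')]
urllib.request.install_opener(opener)

urllib.request.urlretrieve(URL, IMAGE)  # now sends the custom User-Agent

This is not guaranteed to work for every site (some block by IP or require cookies), but it addresses the common case where only the default Python User-Agent is being refused.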