How to download multiple files with Python 3.7

Date: 2019-07-30 11:03:56

Tags: python python-3.x http

A beginner's question: I have a .txt file that contains a list of .html files to download. The contents of the file look like this:

http://www.example.com/file1.html
http://www.example.com/file2.html
http://www.example.com/file3.html

I can get Python to download a single file with the code below, but I would like it to read each URL from the .txt file and download each .html file.

import urllib.request
url = 'http://www.example.com/file1.html'
urllib.request.urlretrieve(url, '/users/user/Downloads/file1.html')

Is there a simple way to do this?

3 Answers:

Answer 0 (score: 1):

import urllib.request

with open('file.txt') as f:
    for line in f:
        url = line.strip()  # drop the trailing newline before using the URL
        # keep the last path segment as the local file name
        path = 'your path' + url.split('/')[-1]
        urllib.request.urlretrieve(url, path)
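
Note that urlretrieve raises an exception on a network or HTTP error, so a single bad line would abort the whole loop. A minimal sketch of how you might guard against that (the try/except wrapper and blank-line skip are additions for illustration, not part of the answer above):

import urllib.request

with open('file.txt') as f:
    for line in f:
        url = line.strip()
        if not url:
            continue  # skip blank lines in the list
        try:
            urllib.request.urlretrieve(url, 'your path' + url.split('/')[-1])
        except OSError as exc:  # urllib.error.URLError is a subclass of OSError
            print('failed to download', url, ':', exc)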

Answer 1 (score: 1):

You first have to open the .txt file before you can iterate over it. Then you can use a for loop to go through the URLs one by one:

import os
import urllib.request

with open('pages.txt', 'r') as urls:
    for url in urls:
        url = url.strip()  # remove the trailing newline
        path = '/users/user/Downloads/{}'.format(os.path.basename(url))
        urllib.request.urlretrieve(url, path)
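
os.path.basename happens to work here because URL paths use '/' separators, but it would keep any ?query part attached to the file name. A sketch of a slightly more careful variant using urllib.parse (the helper function is an addition for illustration):

import os
from urllib.parse import urlsplit

def filename_from_url(url):
    # take only the path component, ignoring any ?query or #fragment
    return os.path.basename(urlsplit(url).path)

print(filename_from_url('http://www.example.com/file1.html?v=2'))  # file1.html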

Answer 2 (score: 0):

You can use a ThreadPool or a process pool for concurrency, as in this tutorial:

import requests
from multiprocessing.pool import ThreadPool

def download_url(url):
    print("downloading:", url)
    # assumes that the last segment after the / represents the file name
    # if url is abc/xyz/file.txt, the file name will be file.txt
    file_name = url[url.rfind("/") + 1:]

    r = requests.get(url, stream=True)
    if r.status_code == requests.codes.ok:
        with open(file_name, 'wb') as f:
            # iterating the streamed response writes the body in small chunks
            for data in r:
                f.write(data)
    return url


urls = ["https://jsonplaceholder.typicode.com/posts",
        "https://jsonplaceholder.typicode.com/comments",
        "https://jsonplaceholder.typicode.com/photos",
        "https://jsonplaceholder.typicode.com/todos",
        "https://jsonplaceholder.typicode.com/albums"
        ]

# Run 5 threads; each call takes the next element from the urls list
results = ThreadPool(5).imap_unordered(download_url, urls)
for r in results:
    print(r)
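
To apply this to the question's list of links, you might read the URLs out of the .txt file first and feed them to the pool. A minimal sketch, reusing the download_url function and ThreadPool import from the snippet above (the file name 'file.txt' matches the question's setup, not the tutorial's):

with open('file.txt') as f:
    urls = [line.strip() for line in f if line.strip()]

# download all URLs from the file, 5 at a time
for finished in ThreadPool(5).imap_unordered(download_url, urls):
    print(finished)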