I'm trying to create a simple program with threads in Python 3 that queues up concurrent image downloads from URL links, using 4 or more threads to download 4 images at a time into the PC's downloads folder, while sharing information between the threads to avoid duplicates. I figure I could use something like URL1 = "Link1"? Here are some example links:
"https://unab-dw2018.s3.amazonaws.com/ldp2019/1.jpeg"
"https://unab-dw2018.s3.amazonaws.com/ldp2019/2.jpeg"
But I don't understand how to use threads together with a queue, and I don't know how to go about it.
I tried searching for a page that explains how to do concurrent downloads using threads with a queue, but I only found links about threads.
Here is some code that partially works. What I need is for the program to ask how many threads you want and then keep downloading images until it reaches image 20; but as the code stands, if you enter 5 it will only download 5 images, and so on. The point is that if I enter 5, it should first download 5 images, then another 5, and so on up to 20. If it's 4, then 4, 4, 4, 4, 4. If it's 6, then it goes 6, 6, 6 and then downloads the remaining 2. I have to implement a queue in this code somehow, but I only learned about threads a few days ago and I'm lost on how to mix threads and queues.
import threading
import urllib.request
import queue  # i need to use this somehow


def worker(cont):
    print("The worker is ON", cont)
    image_download = "URL" + str(cont) + ".jpeg"
    download = urllib.request.urlopen(image_download)
    file_save = open("Image " + str(cont) + ".jpeg", "wb")
    file_save.write(download.read())
    file_save.close()
    return cont + 1


threads = []
q_threads = int(input("Choose input amount of threads between 4 and 20"))

for i in range(0, q_threads):
    h = threading.Thread(target=worker, args=(i + 1,))
    threads.append(h)

for i in range(0, q_threads):
    threads[i].start()
Answer 0 (score: 1)
I adapted the following from some code I used to run multithreaded PSO:
import threading
import queue


class picture_getter(threading.Thread):
    def __init__(self, url, picture_queue):
        self.url = url
        self.picture_queue = picture_queue
        super(picture_getter, self).__init__()

    def run(self):
        print("Starting download on " + str(self.url))
        self._get_picture()

    def _get_picture(self):
        # --- get your picture --- #
        # 'picture' is a placeholder for whatever your download returns
        self.picture_queue.put(picture)


if __name__ == "__main__":
    picture_queue = queue.Queue(maxsize=0)
    picture_threads = []
    picture_urls = ["string.com", "string2.com"]

    # create and start the threads
    for url in picture_urls:
        picture_threads.append(picture_getter(url, picture_queue))
        picture_threads[-1].start()

    # wait for threads to finish
    for picture_thread in picture_threads:
        picture_thread.join()

    # get the results
    picture_list = []
    while not picture_queue.empty():
        picture_list.append(picture_queue.get())
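The "# --- get your picture --- #" line is left as a placeholder. One possible way to fill it in, assuming you also add import urllib.request at the top (it is the same call the full example further down uses), would be:

    def _get_picture(self):
        # download the image and hand the raw bytes back through the shared queue
        download = urllib.request.urlopen(self.url)
        self.picture_queue.put(download.read())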
Note that people on stackoverflow like to see what you have tried before they hand you a solution. But I had this code lying around anyway. Welcome aboard, newbie!
One thing I will add is that this does not avoid duplicates by sharing information between the threads. It avoids duplicates because each thread already knows exactly what it has to download. If your files are numbered the way your question suggests, that should be fine, since you can easily build a list of those names.
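If you do want the threads to coordinate through shared information, a common pattern is to put every URL into a queue.Queue and let a fixed pool of worker threads pull from it until it is empty; each URL is taken exactly once, so duplicates are avoided by construction. A rough sketch of that idea (the download_worker name and the "Image <n>.jpeg" file naming are just my choices here; the URL prefix is taken from your links):

import threading
import urllib.request
import queue

def download_worker(url_queue):
    # each worker keeps pulling (url, file_name) pairs until the shared queue is empty
    while True:
        try:
            url, file_name = url_queue.get_nowait()
        except queue.Empty:
            break
        download = urllib.request.urlopen(url)
        file_save = open("Image " + file_name, "wb")
        file_save.write(download.read())
        file_save.close()
        url_queue.task_done()

if __name__ == "__main__":
    url_prefix = "https://unab-dw2018.s3.amazonaws.com/ldp2019/"
    url_queue = queue.Queue()
    for i in range(20):
        name = "{}.jpeg".format(i + 1)
        url_queue.put((url_prefix + name, name))

    num_threads = int(input("Choose input amount of threads between 4 and 20: "))
    workers = [threading.Thread(target=download_worker, args=(url_queue,))
               for _ in range(num_threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

With this layout the thread count only controls how many downloads run at once; the queue itself decides which thread gets which picture.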
Code updated to address the edits to Treyons' original post:
import threading
import urllib.request
import queue
import time


class picture_getter(threading.Thread):
    def __init__(self, url, file_name, picture_queue):
        self.url = url
        self.file_name = file_name
        self.picture_queue = picture_queue
        super(picture_getter, self).__init__()

    def run(self):
        print("Starting download on " + str(self.url))
        self._get_picture()

    def _get_picture(self):
        print("{}: Simulating delay".format(self.file_name))
        time.sleep(1)

        # download and save the image
        download = urllib.request.urlopen(self.url)
        file_save = open("Image " + self.file_name, "wb")
        file_save.write(download.read())
        file_save.close()

        # report the finished file name back through the shared queue
        self.picture_queue.put("Image " + self.file_name)


def remainder_or_max_threads(num_pictures, num_threads, iterations):
    # remaining pictures
    remainder = num_pictures - (num_threads * iterations)

    # if there are equal or more pictures remaining than max threads
    # return max threads, otherwise the remaining number of pictures
    if remainder >= num_threads:
        return num_threads
    else:
        return remainder


if __name__ == "__main__":
    # store the response from the threads
    picture_queue = queue.Queue(maxsize=0)
    picture_threads = []
    num_pictures = 20

    url_prefix = "https://unab-dw2018.s3.amazonaws.com/ldp2019/"
    picture_names = ["{}.jpeg".format(i + 1) for i in range(num_pictures)]

    max_threads = int(input("Choose input amount of threads between 4 and 20: "))
    iterations = 0

    # during the majority of runtime, iterations * max_threads is
    # the number of pictures that have been downloaded;
    # when it exceeds num_pictures, all pictures have been downloaded
    while iterations * max_threads < num_pictures:
        # this returns max_threads if there are max_threads or more pictures left to download,
        # else it returns the number of remaining pictures
        threads = remainder_or_max_threads(num_pictures, max_threads, iterations)

        # loop through the next section of pictures, create and start their threads
        for name, i in zip(picture_names[iterations * max_threads:], range(threads)):
            picture_threads.append(picture_getter(url_prefix + name, name, picture_queue))
            picture_threads[i + iterations * max_threads].start()

        # wait for the current batch of threads to finish
        for picture_thread in picture_threads:
            picture_thread.join()

        # increment the iterations
        iterations += 1

    # get the results
    picture_list = []
    while not picture_queue.empty():
        picture_list.append(picture_queue.get())

    print("Successfully downloaded")
    print(picture_list)
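If you later want something shorter than managing the Thread objects by hand, the standard library's concurrent.futures.ThreadPoolExecutor provides the same pooling behaviour as the batching loop above. A minimal sketch reusing the url_prefix and the numbered picture names from above (download_one is just an illustrative helper, not part of the code above):

from concurrent.futures import ThreadPoolExecutor
import urllib.request

URL_PREFIX = "https://unab-dw2018.s3.amazonaws.com/ldp2019/"

def download_one(name):
    # fetch one image and save it under the same "Image <name>" convention
    download = urllib.request.urlopen(URL_PREFIX + name)
    file_save = open("Image " + name, "wb")
    file_save.write(download.read())
    file_save.close()
    return "Image " + name

if __name__ == "__main__":
    picture_names = ["{}.jpeg".format(i + 1) for i in range(20)]
    max_threads = int(input("Choose input amount of threads between 4 and 20: "))
    with ThreadPoolExecutor(max_workers=max_threads) as executor:
        picture_list = list(executor.map(download_one, picture_names))
    print("Successfully downloaded")
    print(picture_list)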