I want to iterate over a dictionary of dictionaries (simulating the structure of a directory or a website) in Python, using multithreading and a queue (to limit the number of threads). I created mainDict to simulate this:
mainDict = {"Layer1": {"Layer11": 1, "Layer12": 1, "Layer13": 1, "Layer14": 1, "Layer15": 1, "Layer16": 1},
"Layer2": {"Layer21": 2, "Layer22": 2, "Layer23": 2, "Layer24": 2, "Layer25": 2, "Layer26": 2},
"Layer3": {"Layer31": 4, "Layer32": 4, "Layer33": 4, "Layer34": 4, "Layer35": 4, "Layer36": 4},
"Layer4": {"Layer41": 8, "Layer42": 8, "Layer43": 8, "Layer44": 8, "Layer45": 8, "Layer46": 8},
"Layer5": {"Layer51": 16, "Layer52": 16, "Layer53": 16, "Layer54": 16, "Layer55": 16, "Layer56": 16},
"Layer6": {"Layer61": 32, "Layer62": 32, "Layer63": 32, "Layer64": 32, "Layer65": 32, "Layer66": 32}}
and a Crawler class that instantiates one crawler per first-level sub-dictionary of mainDict.
The idea is to create 2 threads (a limited number of threads/crawlers at a time, to keep CPU usage down) that crawl Layer(i) (i = 1..6). Each thread crawls until it reaches the leaves of the "tree" before moving on to the next dictionary (e.g. crawler 0 works through Layer1 while crawler 1 works through Layer2; whichever finishes first then takes Layer3, and so on).
import threading
import time
from queue import Queue

class Crawler:
    def __init__(self, rootDict, number_of_delay, crawler):
        self.crawler = crawler
        self.rootDict = rootDict
        self.number_of_delay = number_of_delay

    def crawlAllLeaves(self, myDict):
        for k, v in myDict.items():
            if isinstance(v, dict):
                print("Crawler {} is crawling {}".format(self.crawler, k))
                self.crawlAllLeaves(v)
            else:
                print("Crawler {} reached the value {} for key {}".format(self.crawler, v, k))
                time.sleep(self.number_of_delay + v)

def someAuxFunc():
    # to simulate some loading time
    time.sleep(2)

def createWorker(q, delayNumber, crawler):
    tc = Crawler(mainDict[q.get()], delayNumber, crawler)
    tc.crawlAllLeaves(tc.rootDict)

def threader(q, delayNumber, crawler):
    while True:
        print("crawler {}: has gotten the url {}".format(crawler, q.get()))
        createWorker(q, delayNumber, crawler)
        print("crawler {}: has finished the url {}".format(crawler, q.get()))
        q.task_done()

q = Queue()
number_of_threads = 2
delayNumber = 2

for thread in range(number_of_threads):
    th = threading.Thread(target=threader, args=(q, delayNumber, thread,))
    th.setDaemon(True)
    th.start()

for key, value in mainDict.items():
    someAuxFunc()
    print("QUEING {}".format(key))
    q.put(key)
q.join()
I have two problems.
Can you help me with this? I want to learn Python and threading, and I don't know what I'm doing wrong.
Answer 0 (score: 1)
Your problem is in how you handle the queue, and it explains both of your issues: you keep reading from the queue instead of using the value you actually received from it. Look at this (fixed) code:
def createWorker(bar, delayNumber, crawler):
    tc = Crawler(mainDict[bar], delayNumber, crawler)
    tc.crawlAllLeaves(tc.rootDict)

def threader(q, delayNumber, crawler):
    while True:
        foo = q.get()
        print("crawler {}: has gotten the url {}".format(crawler, foo))
        createWorker(foo, delayNumber, crawler)
        print("crawler {}: has finished the url {}".format(crawler, foo))
        q.task_done()
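For completeness, here is a minimal self-contained sketch of the corrected pattern. It is an illustration, not your exact program: a trimmed two-layer dictionary with near-zero delays stands in for the original mainDict so that the run finishes quickly, and a list records the leaves each crawler visited.

```python
import threading
import time
from queue import Queue

# Assumption: a small stand-in for the original mainDict, with tiny delays.
mainDict = {"Layer1": {"Layer11": 0, "Layer12": 0},
            "Layer2": {"Layer21": 0, "Layer22": 0}}

visited = []                 # leaves seen, for inspection after the run
lock = threading.Lock()

def crawl_all_leaves(d, crawler):
    for k, v in d.items():
        if isinstance(v, dict):
            crawl_all_leaves(v, crawler)
        else:
            with lock:
                visited.append((crawler, k, v))
            time.sleep(v)    # simulate work

def threader(q, crawler):
    while True:
        key = q.get()        # read the queue exactly once per task...
        crawl_all_leaves(mainDict[key], crawler)
        q.task_done()        # ...and report exactly one task done

q = Queue()
for i in range(2):
    th = threading.Thread(target=threader, args=(q, i), daemon=True)
    th.start()

for key in mainDict:
    q.put(key)
q.join()                     # returns: every get() is matched by a task_done()
print(len(visited))          # all 4 leaves were crawled
```

Because each loop iteration pairs one get() with one task_done(), q.join() returns as soon as both top-level keys have been processed.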
In threader we now read the queue once into a variable, then pass that variable to createWorker. In createWorker you use this value instead of fetching another one from the queue.
Your original code first gets a value from the queue inside the first print statement; it prints that value and then discards it. Then you call createWorker, which gets the next value from the queue and starts working on it. Finally, the second print statement gets yet another value from the queue and prints it. None of the values shown in the print statements are actually the one passed to createWorker.
Also remember that Queue.get() blocks. Since you fetch three values for every one you actually process, you get the result you observed, but definitely not the one you wanted. And your code blocks on the final q.join(): you have fetched values from the queue with get() three times but called task_done() only once, so the join blocks because it assumes there are still tasks in progress.
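This bookkeeping can be observed directly on a plain Queue without any threads: the unfinished_tasks counter (an internal but long-stable CPython attribute, used here only for illustration) counts put() calls against task_done() calls, and join() returns only when it reaches zero.

```python
from queue import Queue

q = Queue()
for item in ("a", "b", "c"):
    q.put(item)              # 3 unfinished tasks

q.get(); q.get(); q.get()    # three gets, like one pass of the original threader loop...
q.task_done()                # ...but only one task_done

print(q.unfinished_tasks)    # 2 tasks still "open", so q.join() would block forever
```

Note that get() itself does not decrement the counter; only task_done() does, which is why the mismatch makes join() hang.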