Python: sharing a timed lock between spawned processes so there is a delay between them

Date: 2019-06-18 21:06:56

Tags: python multiprocessing locking

I am trying to print the IDs in this list so that there is a delay between the start of one process's work and the next, and between each queue.get (which I implement with a threading.Timer that releases a shared lock). The problem I am running into is that while my current timer setup does let me lock the processes, so that for 2 seconds after one process gets a record from the queue none of the others can start, only 2 of the 4 processes close at the end of the run. How can I fix this so that all the processes close and the program can exit?

My output below shows this; I would have expected two more "worker closed" notifications:

Process started
Process started
Process started
Process started
begin 1 : 1560891818.0307562
begin 2 : 1560891820.0343137
begin 3 : 1560891822.0381632
end 2 : 3.0021514892578125
end 1 : 6.004615068435669
begin 4 : 1560891824.0439706
begin 5 : 1560891826.0481522
end 4 : 3.004107713699341
end 3 : 6.005637168884277
begin 6 : 1560891828.0511773
begin 7 : 1560891830.0557532
end 6 : 3.0032966136932373
end 5 : 6.006829261779785
begin 8 : 1560891832.056265
begin 9 : 1560891834.0593572
end 8 : 3.011284112930298
end 7 : 6.005618333816528
begin 10 : 1560891836.0627353
end 10 : 3.0014095306396484
worker closed
end 9 : 6.000675916671753
worker closed
import multiprocessing
from time import sleep, time
import threading

class TEMP:

    lock = multiprocessing.Lock()

    id_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    queue = multiprocessing.Queue(10)

    DELAY = 2

    def mp_worker(self, queue, lock):

        while queue.qsize() > 0:

            lock.acquire()
            # Release the lock after a delay
            threading.Timer(self.DELAY,lock.release).start()

            record = queue.get()
            start_time = time()
            print("begin {0} : {1}".format(record, start_time))
            if (record % 2 == 0):
                sleep(3)
            else:
                sleep(6)
            print("end {0} : {1}".format(record, time() - start_time))

            threading.Timer.join()

        print("worker closed")

    def mp_handler(self):

        # Spawn four processes, assigning the method to be executed
        # and the input arguments (the queue)
        processes = [multiprocessing.Process(target=self.mp_worker, args=([self.queue, self.lock])) \
            for _ in range(4)]

        for process in processes:
            process.start()
            print('Process started')


        for process in processes:
            process.join()

    def start_mp(self):

        for id in self.id_list:
            self.queue.put(id)

        self.mp_handler()

if __name__ == '__main__':
    temp = TEMP()
    temp.start_mp()
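For reference, the Timer-releases-lock pattern the question relies on can be shown working in a single process (a minimal sketch; `staggered_step`, `starts`, and `DELAY = 0.2` are illustrative names, not from the question):

```python
# Minimal single-process sketch of the question's staggering pattern:
# acquire a shared lock, then let a threading.Timer release it DELAY
# seconds later, so the next acquire cannot succeed any sooner.
import threading
from time import time

DELAY = 0.2
lock = threading.Lock()
starts = []

def staggered_step(label):
    lock.acquire()
    # Release the lock after a delay, exactly as in the question's worker.
    threading.Timer(DELAY, lock.release).start()
    starts.append((label, time()))

for i in range(3):
    staggered_step(i)

gaps = [b[1] - a[1] for a, b in zip(starts, starts[1:])]
print(all(g >= DELAY for g in gaps))  # → True: consecutive starts are >= DELAY apart
```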

1 answer:

Answer 0: (score: 0)

I actually solved this. The main reason my code was not joining was that it checked whether the queue was empty, waited through the delay, and only then tried to get something from the queue. This meant that at the end of the run, although the queue was already empty and 2 of the 4 processes happened to finish at the same time, the other 2 processes were still inside their delay. When that delay ended they tried to get something from the queue, but since the queue was empty they blocked, the rest of the worker code never ran, and they could never join back up.
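The blocking described here is the standard hazard of calling `queue.get()` on a possibly-empty queue. One common alternative (a sketch, not the fix used in this answer; `drain` and the timeout value are illustrative) is a `get` with a timeout, catching the `queue.Empty` exception so the worker can fall through and exit:

```python
import multiprocessing
import queue  # provides the queue.Empty exception raised by Queue.get

def drain(q, timeout=0.5):
    """Consume records until the queue yields nothing for `timeout` seconds."""
    results = []
    while True:
        try:
            # Block at most `timeout` seconds; raises queue.Empty on timeout.
            record = q.get(timeout=timeout)
        except queue.Empty:
            break  # queue drained: fall through so the worker can exit and join
        results.append(record)
    return results

q = multiprocessing.Queue()
for i in [1, 2, 3]:
    q.put(i)
print(drain(q))  # → [1, 2, 3]
```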

I fixed this by checking again whether the queue is empty immediately before the process tries to get something from it. My fixed worker function is below:

# assumes: import multiprocessing as mp; from threading import Timer
def mp_worker(self, queue, lock):

    while not queue.empty():

        print(mp.current_process().name)
        lock.acquire()
        # Release the lock after a delay
        timer = Timer(self.DELAY, lock.release)
        timer.start()

        if not queue.empty():
            record = queue.get(False)

            start_time = time()
            print("begin {0} : {1}".format(record, start_time))
            if (record % 2 == 0):
                sleep(3)
            else:
                sleep(6)
            print("end {0} : {1}".format(record, time() - start_time))

    print("{0} closed".format(mp.current_process().name))
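Even with the double check, `empty()` followed by `get(False)` leaves a small race: another worker can drain the queue between the check and the get, raising `queue.Empty`. A sentinel-based shutdown sidesteps `empty()` entirely (a sketch with hypothetical names; the key idea is queuing one sentinel per worker so every blocked `get` is eventually satisfied):

```python
import multiprocessing

SENTINEL = None  # hypothetical end-of-work marker, one queued per worker

def worker(q, out):
    while True:
        record = q.get()      # safe to block: a sentinel is guaranteed to arrive
        if record is SENTINEL:
            break             # clean exit, so process.join() returns
        out.put(record * 2)   # stand-in for the real per-record work

if __name__ == "__main__":
    q, out = multiprocessing.Queue(), multiprocessing.Queue()
    for rec in range(10):
        q.put(rec)
    workers = [multiprocessing.Process(target=worker, args=(q, out))
               for _ in range(4)]
    for _ in workers:
        q.put(SENTINEL)       # one sentinel per worker guarantees shutdown
    for p in workers:
        p.start()
    results = sorted(out.get() for _ in range(10))
    for p in workers:
        p.join()              # all four workers exit; no hang
    print(results)            # → [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```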