Kill a thread without waiting for join

Asked: 2019-01-31 16:21:29

Tags: python multithreading

I want to kill a thread in Python. The thread may be running a blocking operation, so join cannot terminate it.

Something like this:

from threading import Thread
import time

def block():
    while True:
        print("running")
        time.sleep(1)


if __name__ == "__main__":
    thread = Thread(target=block)
    thread.start()
    # kill thread
    # do other stuff

My problem is that the actual blocking operation lives in another module that I did not write, so there is no place to check a running variable.

3 Answers:

Answer 0 (score: 1)

If you make the thread a daemon, it will be killed when the main process exits:

from threading import Thread
import sys
import time

def block():
    while True:
        print("running")
        time.sleep(1)


if __name__ == "__main__":
    thread = Thread(target=block, daemon=True)
    thread.start()
    sys.exit(0)

Otherwise, just set a flag. I'm using a bad example here (you should use a proper synchronization primitive, not just a plain variable):

from threading import Thread
import time
RUNNING = True
def block():
    global RUNNING
    while RUNNING:
        print("running")
        time.sleep(1)


if __name__ == "__main__":
    thread = Thread(target=block, daemon=True)
    thread.start()
    RUNNING = False  # thread will stop at its next loop iteration; it is not killed
    # ... continue your stuff here
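The "proper synchronization primitive" that answer alludes to is typically a `threading.Event`. A minimal sketch of my own (not from the answer), with the bonus that `Event.wait(timeout)` doubles as an interruptible sleep:

```python
from threading import Thread, Event

stop_event = Event()

def block():
    # wait() returns False on timeout and True as soon as the event is set,
    # so the loop wakes immediately when shutdown is requested.
    while not stop_event.wait(timeout=1):
        print("running")

if __name__ == "__main__":
    thread = Thread(target=block, daemon=True)
    thread.start()
    stop_event.set()  # request shutdown
    thread.join()     # returns promptly: wait() wakes without finishing the sleep
```

Unlike a plain global flag, this needs no `global` statement, and the thread reacts to shutdown without sleeping out the rest of its interval.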

Answer 1 (score: 0)

Use a running variable:

from threading import Thread
import time

running = True

def block():
    global running
    while running:
        print("running")
        time.sleep(1)


if __name__ == "__main__":
    thread = Thread(target=block)
    thread.start()

    running = False

    # do other stuff

I'd prefer to wrap it all up in a class, but this should work (though untested).
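The class wrapper this answer has in mind might look like the following. This is my own sketch, untested against the original; the name `StoppableThread` and its `stop()` method are invented for illustration:

```python
from threading import Thread, Event

class StoppableThread(Thread):
    """A thread whose work loop checks a stop flag between iterations."""

    def __init__(self, interval=1):
        super().__init__(daemon=True)
        self._stop_event = Event()
        self._interval = interval

    def run(self):
        # wait() returns True once stop() has been called, ending the loop.
        while not self._stop_event.wait(self._interval):
            print("running")

    def stop(self, timeout=None):
        self._stop_event.set()
        self.join(timeout)

if __name__ == "__main__":
    worker = StoppableThread(interval=1)
    worker.start()
    worker.stop()  # returns once the loop notices the flag
```

Keeping the flag inside the object avoids the module-level `global` and lets you run several independently stoppable workers.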

Answer 2 (score: 0)

Edit

There is a way to asynchronously raise an exception in a separate thread, which can then be caught by a try/except block, but it is a dirty hack: https://gist.github.com/liuw/2407154
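The linked gist builds on CPython's `ctypes.pythonapi.PyThreadState_SetAsyncExc`, which schedules an exception in another thread. Note the exception is only delivered the next time that thread executes Python bytecode, so this still cannot break out of a blocking C call. A sketch of the idea (CPython-only, and genuinely a hack):

```python
import ctypes
import threading
import time

def async_raise(thread, exc_type):
    """Schedule exc_type to be raised in the given thread (CPython only)."""
    tid = ctypes.c_ulong(thread.ident)
    n = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        tid, ctypes.py_object(exc_type))
    if n == 0:
        raise ValueError("invalid thread id")
    elif n > 1:
        # More than one thread state was affected; undo and bail out.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, None)
        raise SystemError("PyThreadState_SetAsyncExc affected %d threads" % n)

def victim():
    try:
        while True:
            time.sleep(0.1)  # the exception lands after sleep() returns
    except SystemExit:
        pass  # cleanup could go here

if __name__ == "__main__":
    t = threading.Thread(target=victim)
    t.start()
    async_raise(t, SystemExit)
    t.join()
```

Because delivery waits for the next bytecode boundary, this works against loops that call out to short C operations like `sleep()`, but not against a single long-blocking call.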

Original post

"I want to kill a thread in Python." You can't. Threads are only killed as daemons, when no non-daemon threads remain running in the parent process. Any thread can be politely asked to terminate itself using standard inter-thread communication, but you state that you have no opportunity to interrupt the function you want to kill. That leaves processes.

Processes carry more overhead, and passing data back and forth is harder, but they do support being killed by sending SIGTERM or SIGKILL.

from multiprocessing import Process, Queue
from time import sleep

def workfunction(*args, **kwargs): #any arguments you send to a child process must be picklable by python's pickle module
    sleep(args[0]) #really long computation you might want to kill
    return 'results' #anything you want to get back from a child process must be picklable by python's pickle module


class daemon_worker(Process):
    def __init__(self, target_func, *args, **kwargs):
        self.return_queue = Queue()
        self.target_func = target_func
        self.args = args
        self.kwargs = kwargs
        super().__init__(daemon=True)
        self.start()

    def run(self): #called by self.start()
        self.return_queue.put(self.target_func(*self.args, **self.kwargs))


    def get_result(self): #blocks until a result is available; use self.return_queue.get_nowait() for a non-blocking check that raises queue.Empty
        return self.return_queue.get()


if __name__=='__main__':
    #start some work that takes 1 sec:
    worker1 = daemon_worker(workfunction, 1)
    worker1.join(3) #wait up to 3 sec for the worker to complete
    if not worker1.is_alive(): #if we didn't hit 3 sec timeout
        print('worker1 got: {}'.format(worker1.get_result()))
    else:
        print('worker1 still running')
        worker1.terminate()
        print('killing worker1')
        sleep(.1) #calling worker.is_alive() immediately might incur a race condition where it may or may not have shut down yet.
        print('worker1 is alive: {}'.format(worker1.is_alive()))

    #start some work that takes 100 sec:
    worker2 = daemon_worker(workfunction, 100)
    worker2.join(3) #wait up to 3 sec for the worker to complete
    if not worker2.is_alive(): #if we didn't hit 3 sec timeout
        print('worker2 got: {}'.format(worker2.get_result()))
    else:
        print('worker2 still running')
        worker2.terminate()
        print('killing worker2')
        sleep(.1) #calling worker.is_alive() immediately might incur a race condition where it may or may not have shut down yet.
        print('worker2 is alive: {}'.format(worker2.is_alive()))