How do I pass arguments to a thread?

Date: 2014-08-01 03:25:24

Tags: python multithreading

I want to use the threading module to add 5 to each element of range(1, 100) and see which result comes from which thread. I have almost all of the code done, but how do I pass arguments to threading.Thread?

import threading,queue
x=range(1,100)
y=queue.Queue()
for i in x:
    y.put(i)

def myadd(x):
    print(x+5)


for i in range(5):
    print(threading.Thread.getName())
    threading.Thread(target=myadd,args=x).start() #it is wrong here
    y.join()

Thanks to dano, it works now. To run it interactively, I rewrote it as:

Method 1: run interactively.

from concurrent.futures import ThreadPoolExecutor
import threading
x = range(1, 100)

def myadd(x):
    print("Current thread: {}. Result: {}.".format(threading.current_thread(), x+5))

def run():
    t = ThreadPoolExecutor(max_workers=5)
    t.map(myadd, x)
    t.shutdown()
run()

Method 2:

from concurrent.futures import ThreadPoolExecutor
import threading
x = range(1, 100)
def myadd(x):
    print("Current thread: {}. Result: {}.".format(threading.current_thread(), x+5))
def run():
    t = ThreadPoolExecutor(max_workers=5)
    t.map(myadd, x)
    t.shutdown()
if __name__=="__main__":
    run()
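
As a side note (not part of the original question), `ThreadPoolExecutor` can also be used as a context manager, which waits for all submitted tasks and calls `shutdown()` automatically. A minimal sketch of the same example in that style:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

def myadd(x):
    return "{}: {}".format(threading.current_thread().name, x + 5)

def run():
    # The with-block waits for all tasks and shuts the pool down on exit.
    with ThreadPoolExecutor(max_workers=5) as t:
        return list(t.map(myadd, range(1, 100)))

if __name__ == "__main__":
    for line in run():
        print(line)
```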

What if I want to pass more args to ThreadPoolExecutor? I want to use the multiprocessing module to compute 1+3, 2+4, 3+5 ... up to 100+102. And what about computing 20+1, 20+2, 20+3 ... up to 20+100 with the multiprocessing module?

from multiprocessing.pool import ThreadPool
do = ThreadPool(5)
def myadd(x,y):
    print(x+y)

do.apply(myadd,range(3,102),range(1,100))

How can I fix this?

3 Answers:

Answer 0 (score: 1)

Here you need to pass a tuple rather than using a single element.

To make a tuple, the code would be:

dRecieved = connFile.readline()
processThread = threading.Thread(target=processLine, args=(dRecieved,))
processThread.start()

See here for a more detailed explanation.
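
The snippet above comes from a different question and references names (connFile, processLine) that are not defined here. A self-contained sketch of the same idea, applied to this question's myadd, might look like:

```python
import threading

results = []

def myadd(x):
    results.append(x + 5)

# args expects an iterable of arguments; a one-element tuple
# needs the trailing comma, so it is (7,) and not (7).
t = threading.Thread(target=myadd, args=(7,))
t.start()
t.join()
print(results)  # the single result computed by the thread
```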

Answer 1 (score: 0)

It looks like you're trying to create a thread pool by hand, so that five threads are used to add up all 100 results. If that's the case, I'd recommend using multiprocessing.pool.ThreadPool for this:

from multiprocessing.pool import ThreadPool
import threading
import queue

x = range(1, 100)

def myadd(x):
    print("Current thread: {}. Result: {}.".format(
               threading.current_thread(), x+5))

t = ThreadPool(5)
t.map(myadd, x)
t.close()
t.join()

If you're using Python 3.x, you could use concurrent.futures.ThreadPoolExecutor instead:

from concurrent.futures import ThreadPoolExecutor
import threading

x = range(1, 100)

def myadd(x):
    print("Current thread: {}. Result: {}.".format(threading.current_thread(), x+5))

t = ThreadPoolExecutor(max_workers=5)
t.map(myadd, x)
t.shutdown()

I think there are two issues with your original code. First, you need to pass a tuple to the args keyword argument, rather than a single element:

threading.Thread(target=myadd,args=(x,))

However, you're also trying to pass the entire list returned by range (or the range(1, 100) object, if you're using Python 3.x) to myadd, which isn't really what you want to do. It's also not clear what you're using the queue for; maybe you meant to pass it to myadd?
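
Putting those fixes together, a corrected version of the original Thread-plus-Queue code (a sketch, keeping the asker's five threads; the results list is an addition so the output can be inspected) could look like:

```python
import threading
import queue

q = queue.Queue()
for i in range(1, 100):
    q.put(i)

results = []  # list.append is atomic, so sharing it is safe here

def myadd(work_queue):
    # Each worker drains numbers from the shared queue.
    while True:
        try:
            x = work_queue.get_nowait()
        except queue.Empty:
            return
        results.append((threading.current_thread().name, x + 5))
        work_queue.task_done()

threads = [threading.Thread(target=myadd, args=(q,)) for _ in range(5)]
for t in threads:
    t.start()
q.join()  # returns once every item has been marked task_done()
for name, value in results:
    print("{}: {}".format(name, value))
```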

One final note: Python uses a Global Interpreter Lock (GIL), which prevents more than one thread from using the CPU at a time. This means that doing CPU-bound operations (like addition) in threads provides no performance boost in Python, since only one of the threads will ever run at a time. Therefore, in Python it's preferable to use multiple processes to parallelize CPU-bound operations. You could make the above code use multiple processes by replacing ThreadPool in the first example with Pool, imported via from multiprocessing import Pool. In the second example, you would use ProcessPoolExecutor instead of ThreadPoolExecutor. You would also probably want to replace threading.current_thread() with os.getpid().
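
A sketch of the process-based variant described above (assuming a platform where multiprocessing works normally; the __main__ guard is required so worker processes don't re-execute the pool setup):

```python
from multiprocessing import Pool
import os

def myadd(x):
    # os.getpid() identifies the worker process that ran the task.
    return "pid {}: {}".format(os.getpid(), x + 5)

if __name__ == "__main__":
    with Pool(5) as p:
        for line in p.map(myadd, range(1, 100)):
            print(line)
```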

Edit:

Here's how to handle the case where you have two different args to pass:

from multiprocessing.pool import ThreadPool

def myadd(x,y):
    print(x+y)

def do_myadd(x_and_y):
    return myadd(*x_and_y)

do = ThreadPool(5)    
do.map(do_myadd, zip(range(3, 102), range(1, 100)))

We use zip to create a list where we pair up each element from the two ranges:

[(3, 1), (4, 2), (5, 3), ...]

We use map to call do_myadd with each tuple in that list, and do_myadd uses tuple expansion (*x_and_y) to expand the tuple into two separate arguments, which get passed to myadd.
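
On Python 3.3+, Pool.starmap (also inherited by ThreadPool) does this unpacking for you, so the do_myadd wrapper isn't needed:

```python
from multiprocessing.pool import ThreadPool

def myadd(x, y):
    return x + y

pool = ThreadPool(5)
# starmap unpacks each (x, y) tuple into a myadd(x, y) call
results = pool.starmap(myadd, zip(range(3, 102), range(1, 100)))
pool.close()
pool.join()
print(results[:3])  # → [4, 6, 8]
```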

Answer 2 (score: 0)

From:

import threading,queue
x=range(1,100)
y=queue.Queue()
for i in x:
    y.put(i)

def myadd(x):
    print(x+5)


for i in range(5):
    print(threading.Thread.getName())
    threading.Thread(target=myadd,args=x).start() #it is wrong here
    y.join()

To:

import threading
import queue

# So print() in various threads doesn't garble text; 
# I hear it is better to use RLock() instead of Lock().
screen_lock = threading.RLock() 

# I think range() is an iterator or generator. Thread safe?
argument1 = range(1, 100)
argument2 = [100,] * 100 # will add 100 to each item in argument1

# In Python 3 zip() returns an iterator (in Python 2, a list
# of tuples); it is only consumed once below, to fill the queue.
data = zip(argument1, argument2)

# object where multiple threads can grab data while avoiding deadlocks.
q = queue.Queue()

# Fill the thread-safe queue with mock data
for item in data:
    q.put(item)

# It could be wiser to use one queue for each inbound data stream.
# For example one queue for file reads, one queue for console input,
# one queue for each network socket. Remembering that rates of 
# reading files and console input and receiving network traffic all
# differ and you don't want one I/O operation to block another.
# inbound_file_data = queue.Queue()
# inbound_console_data = queue.Queue() # etc.

# This function is a thread target
def myadd(thread_name, a_queue):

    # This thread-targeted function blocks only within each thread:
    # at a_queue.get(), and at a_queue.put() (if the queue is full).
    #
    # Each thread targeting this function has its own copy of
    # this function's local namespace. So each thread will
    # pause when the queue is empty, on queue.get(), or when
    # the queue is full, on queue.put(). With one queue, this
    # means all threads will block at the same time, when the
    # single queue is full or when the single queue is empty,
    # unless we check the number of remaining items in the
    # queue before calling queue.get() and, if none remain,
    # just exit this function. This presumes the data is a
    # bounded source like a closed file, not a continuous, slow
    # stream like a network connection or a rotating log file.

    # Let each thread continue to read from the global 
    # queue until it is empty. 
    # 
    # This is a bad use-case for using threading. 
    # 
    # If each thread had a separate queue it would be 
    # a better use-case. You don't want one slow stream of 
    # data blocking the processing of a fast stream of data.
    #
    # For a single stream of data it is likely better just not 
    # to use threads. However here is a single "global" queue 
    # example...

    # presumes a_queue starts off not empty
    while a_queue.qsize():
        arg1, arg2 = a_queue.get() # blocking call

        # Prevent console/tty text garble. A blocking acquire()
        # always returns True, so the with-statement is the
        # simplest correct way to hold the lock while printing.
        with screen_lock:
            print('{}: {}'.format(thread_name, arg1 + arg2))
            print('{}: {}'.format(thread_name, arg1 + 5))
            print()

        # allows .join() to keep track of when queue finished
        a_queue.task_done()


# create threads and pass in thread name and queue to thread-target function
threads = []
for i in range(5):
    thread_name = 'Thread-{}'.format(i)
    thread = threading.Thread(
        name=thread_name, 
        target=myadd, 
        args=(thread_name, q))

    # Recommended:
    # queues = [queue.Queue() for _ in range(5)] # put at top of file
    # thread = threading.Thread(
    #   target=myadd,
    #   name=thread_name,
    #   args=(thread_name, queues[i],))
    threads.append(thread)

# Some applications should start threads only after all threads are created.
for thread in threads:
    thread.start()

# Each thread will pull items off the queue. Because the while loop in
# myadd() ends when the queue's qsize() reaches 0, each thread will
# terminate when there is nothing left in the queue.
q.join() # returns once every queued item has been marked task_done()
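
The comments above recommend one queue per thread (or per data stream). A minimal sketch of that variant, under the same assumptions (bounded mock data, one consumer per queue), might look like:

```python
import threading
import queue

NUM_THREADS = 5

# One queue per thread, so one slow stream cannot block the others.
queues = [queue.Queue() for _ in range(NUM_THREADS)]

# Distribute the (arg1, arg2) pairs round-robin across the queues.
for i, item in enumerate(zip(range(1, 100), [100] * 99)):
    queues[i % NUM_THREADS].put(item)

results = []  # list.append is atomic, so sharing it is safe here

def myadd(a_queue):
    # Relying on qsize() is safe here: each queue has a single consumer.
    while a_queue.qsize():
        arg1, arg2 = a_queue.get()
        results.append(arg1 + arg2)
        a_queue.task_done()

threads = [threading.Thread(target=myadd, args=(q,)) for q in queues]
for t in threads:
    t.start()
for q in queues:
    q.join()
print(len(results))  # one result per queued item
```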