Implementing a basic queue/threading process in Python

Date: 2010-07-28 20:11:24

Tags: python multithreading queue parallel-processing simultaneous

Looking for some eyeballs to verify that the pseudo-Python block below makes sense. I want to spawn a number of threads to run some in-proc functions as fast as possible. The idea is to spawn the threads in the main loop, so the application runs the threads in parallel/concurrently.

chunk of code
 -get the filenames from a dir
 -write each filename to a queue
 -spawn a thread for each filename, where each thread 
  waits/reads value/data from the queue
 -the threadParse function then handles the actual processing 
  based on the file that's included via the "execfile" function...


# System modules
import os
from Queue import Queue
from threading import Thread
import time

# Local modules
#import feedparser

# Set up some global variables
appqueue = Queue()

# More threads than the app will ever need; this matches the maximum
# number of files that will ever be in the urldir.
num_fetch_threads = 200


def threadParse(q):
  # decompose the packet to get the various elements
  # (decompose() is assumed to be defined elsewhere)
  line = q.get()
  college, level, packet = decompose(line)

  # build the name of the included file and run it
  fname = college + "_" + level + "_Parse.py"
  execfile(fname)
  q.task_done()


# set up the master loop
while True:
  time.sleep(2)
  # get the files from the dir and set up one thread per file
  filelist = os.listdir("/urldir")
  if filelist:
    for file_ in filelist:
      worker = Thread(target=threadParse, args=(appqueue,))
      worker.start()

    # again, get the files from the dir and fill the queue
    filelist = os.listdir("/urldir")
    for file_ in filelist:
      # stuff the filename in the queue
      appqueue.put(file_)


    # Now wait for the queue to be empty, indicating that we have
    # processed all of the downloads.

  #don't care about this part

  #print '*** Main thread waiting'
  #appqueue.join()
  #print '*** Done'

Thanks/comments/pointers......

Thanks

1 Answer:

Answer 0 (score: 0)

If I understand this correctly: you are spawning a lot of threads to get the task done faster.

This only works if the main part of the work done in each thread happens without holding the GIL. So if there is a lot of waiting for data from the network, the disk, or something similar, it can be a good idea. If each task uses a lot of CPU, this will run much as it would on a single-core, one-CPU machine, and you might as well run the tasks sequentially.
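To illustrate the I/O-bound case (my own sketch, not part of the original answer): a small, fixed pool of worker threads pulls URLs from a Queue and fetches them with urllib2, which spends most of its time waiting on the network rather than holding the GIL. The URLs and the worker count are placeholders; Python 2 syntax is used to match the question's code.

# Minimal sketch of threads helping with I/O-bound work (Python 2).
from Queue import Queue
from threading import Thread
import urllib2

url_queue = Queue()
num_workers = 4                       # placeholder; tune for your workload

def fetch_worker():
    while True:
        url = url_queue.get()
        try:
            data = urllib2.urlopen(url).read()
            print "%s: %d bytes" % (url, len(data))
        except Exception as exc:
            print "failed to fetch %s: %s" % (url, exc)
        url_queue.task_done()

for _ in range(num_workers):
    t = Thread(target=fetch_worker)
    t.setDaemon(True)                 # let the program exit once the queue drains
    t.start()

for url in ["http://example.com/a", "http://example.com/b"]:   # placeholder URLs
    url_queue.put(url)

url_queue.join()                      # block until every URL has been processed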

I should add that what I wrote is true for CPython, but not necessarily for Jython/IronPython. I should also add that if you need to use more CPUs/cores, the multiprocessing module may help.
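For the CPU-bound case, here is a minimal sketch (again my own illustration, not from the answer) of the multiprocessing alternative: a Pool of worker processes maps a parse function over the file list, sidestepping the GIL by using separate processes. The parse_file function and the /urldir path are placeholders standing in for the real per-file work.

# Minimal sketch of the multiprocessing alternative for CPU-bound work (Python 2).
import os
from multiprocessing import Pool

def parse_file(fname):
    # CPU-heavy parsing would go here; return something picklable
    return fname, len(fname)

if __name__ == "__main__":
    filelist = os.listdir("/urldir")          # placeholder directory
    pool = Pool(processes=4)                  # roughly one process per core
    results = pool.map(parse_file, filelist)  # distributes files across processes
    pool.close()
    pool.join()
    print results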