Getting a count of each word in a file using threads

Date: 2018-02-21 05:38:55

Tags: python multithreading parallel-processing

I am currently trying to use threads to count each word in a file in parallel, but right now my code gets slower when I add even a single extra thread. I would expect the runtime to decrease as threads are added until my CPU becomes the bottleneck, after which it should slow down again slightly. I don't understand why it isn't running in parallel.

Here is the code:

import threading
import time
import sys

class CountWords(threading.Thread):
    def __init__(self, lock, pair):
        threading.Thread.__init__(self)
        self.lock = lock
        self.dit = pair[0]   # this thread's own dictionary
        self.list = pair[1]  # this thread's share of the words
    def run(self):
        for word in self.list:
            #self.lock.acquire()
            if word in self.dit:  # "in dict" is O(1); "in dict.keys()" builds a list in Python 2
                self.dit[word] = self.dit[word] + 1
            else:
                self.dit[word] = 1
            #self.lock.release()


def getWordsFromFile(numThreads, fileName):
    lists = [[] for _ in range(int(numThreads))]  # one word list per thread
    print len(lists)
    file = open(fileName, "r")
    # .read().splitlines() instead of readlines() strips the trailing "\n"s
    all_words = map(lambda l: l.split(" "), file.read().splitlines())
    all_words = make1d(all_words)
    cur = 0
    for word in all_words:
        lists[cur].append(word)
        if cur == len(lists) - 1:
            cur = 0
        else:
            cur = cur + 1
    return lists

def make1d(lists):  # flattens a list of lists into a single list
    newList = []
    for x in lists:
        newList += x
    return newList

def printDict(dit):# prints the dictionary nicely
    for key in sorted(dit.keys()):
        print key, ":", dit[key]  



if __name__=="__main__":
    print "Starting now"
    start = int(round(time.time() * 1000))
    lock=threading.Lock()
    ditList=[]
    threadList = []
    args = sys.argv
    numThreads = args[1]
    fileName = "" + args[2]
    for i in range(int(numThreads)):
        ditList.append({})
    wordLists = getWordsFromFile(numThreads, fileName)
    zipped = zip(ditList,wordLists)
    print "got words from file"
    for pair in zipped:
        threadList.append(CountWords(lock, pair))
    for t in threadList:
        t.start()
    for t in threadList:
        t.join()  # join() returns immediately if the thread has already finished
    fin = int(round(time.time() * 1000)) - start
    print "with", numThreads, "threads", "counting the words took :", fin, "ms"
    #printDict(dit)

2 Answers:

Answer 0 (score: 1)

You can use itertools to count the words in a file. Below is a simple example; explore itertools.groupby and adapt the code to your logic.

import itertools

tweets = ["I am a cat", "cat", "Who is a good cat"]

words = sorted(itertools.chain.from_iterable(x.split() for x in tweets))
# groupby only groups adjacent equal items, hence the sort above
count = {k: len(list(v)) for k, v in itertools.groupby(words)}
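As a side note not in the original answer: the standard library's collections.Counter produces the same counts without the sorting step, shown here as a sketch over the same sample data.

```python
from collections import Counter

tweets = ["I am a cat", "cat", "Who is a good cat"]

# Counter tallies every word across all tweets in one pass;
# unlike itertools.groupby, it does not require sorted input
count = Counter(word for tweet in tweets for word in tweet.split())

print(dict(count))
```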

Answer 1 (score: 0)

Because of the GIL (What is a global interpreter lock (GIL)?), Python cannot run threads in parallel (that is, on multiple cores at once).

Adding threads to this task only adds overhead to your code, making it slower.

There are two situations in which threads are still useful:

  • When you do a lot of I/O: threads let your code run concurrently (not in parallel, see https://blog.golang.org/concurrency-is-not-parallelism), so it can get other work done while waiting for responses, which speeds things up.
  • When you don't want a heavy computation to block the rest of your code: you run it in a thread, concurrently with your other tasks.
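The first case can be illustrated with a minimal sketch (Python 3 here; time.sleep stands in for a real blocking call such as a network request):

```python
import threading
import time

def fake_io(results, i):
    # Simulates a blocking I/O call with sleep; sleep releases
    # the GIL, so the other threads keep running meanwhile
    time.sleep(0.2)
    results[i] = i * i

results = {}
start = time.time()
threads = [threading.Thread(target=fake_io, args=(results, i)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# Five 0.2 s "requests" overlap: total time stays near 0.2 s, not 1.0 s
print(sorted(results.items()), round(elapsed, 1))
```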

If you want to make use of all your cores, you need the multiprocessing module (https://docs.python.org/3.6/library/multiprocessing.html).