Please review my Python code to improve its performance

Date: 2010-09-27 04:14:31

Tags: python performance information-retrieval

I'm working on an information-retrieval task. I've built a simple search engine. The InvertedIndex is a Python dictionary object which is serialized (pickled, in Python terms) to a file. The size of this file, the InvertedIndex, is only 6.5 MB.

So my code just unpickles it, searches it for the query, and ranks the matching documents according to their TF-IDF scores. Doesn't sound like a big deal, does it?

It started running 30 minutes ago and is still running. The private bytes and virtual size usage of the pythonw.exe running my 100-line Python script are 88 MB and 168 MB respectively.

When I tried it with a smaller index it was fast. Is it Python or my code? Why is it so slow?

stopwords = ['a' , 'a\'s' , 'able' , 'about' , 'above' , 'according' , 'accordingly' , 'across' , 'actually' , 'after' , 'afterwards' , 'again' , 'against' , 'ain\'t' , 'all' , 'allow' , 'allows' , 'almost' , 'alone' , 'along' , 'already' , 'also' , 'although' , 'always' , 'am' , 'among' , 'amongst' , 'an' , 'and' , 'another' , 'any' , 'anybody' , 'anyhow' , 'anyone' , 'anything' , 'anyway' , 'anyways' , 'anywhere' , 'apart' , 'appear' , 'appreciate' , 'appropriate' , 'are' , 'aren\'t' , 'around' , 'as' , 'aside' , 'ask' , 'asking' , 'associated' , 'at' , 'available' , 'away' , 'awfully' , 'b' , 'be' , 'became' , 'because' , 'become' , 'becomes' , 'becoming' , 'been' , 'before' , 'beforehand' , 'behind' , 'being' , 'believe' , 'below' , 'beside' , 'besides' , 'best' , 'better' , 'between' , 'beyond' , 'both' , 'brief' , 'but' , 'by' , 'c' , 'c\'mon' , 'c\'s' , 'came' , 'can' , 'can\'t' , 'cannot' , 'cant' , 'cause' , 'causes' , 'certain' , 'certainly' , 'changes' , 'clearly' , 'co' , 'com' , 'come' , 'comes' , 'concerning' , 'consequently' , 'consider' , 'considering' , 'contain' , 'containing' , 'contains' , 'corresponding' , 'could' , 'couldn\'t' , 'course' , 'currently' , 'd' , 'definitely' , 'described' , 'despite' , 'did' , 'didn\'t' , 'different' , 'do' , 'does' , 'doesn\'t' , 'doing' , 'don\'t' , 'done' , 'down' , 'downwards' , 'during' , 'e' , 'each' , 'edu' , 'eg' , 'eight' , 'either' , 'else' , 'elsewhere' , 'enough' , 'entirely' , 'especially' , 'et' , 'etc' , 'even' , 'ever' , 'every' , 'everybody' , 'everyone' , 'everything' , 'everywhere' , 'ex' , 'exactly' , 'example' , 'except' , 'f' , 'far' , 'few' , 'fifth' , 'first' , 'five' , 'followed' , 'following' , 'follows' , 'for' , 'former' , 'formerly' , 'forth' , 'four' , 'from' , 'further' , 'furthermore' , 'g' , 'get' , 'gets' , 'getting' , 'given' , 'gives' , 'go' , 'goes' , 'going' , 'gone' , 'got' , 'gotten' , 'greetings' , 'h' , 'had' , 'hadn\'t' , 'happens' , 'hardly' , 'has' , 'hasn\'t' , 'have' 
, 'haven\'t' , 'having' , 'he' , 'he\'s' , 'hello' , 'help' , 'hence' , 'her' , 'here' , 'here\'s' , 'hereafter' , 'hereby' , 'herein' , 'hereupon' , 'hers' , 'herself' , 'hi' , 'him' , 'himself' , 'his' , 'hither' , 'hopefully' , 'how' , 'howbeit' , 'however' , 'i' , 'i\'d' , 'i\'ll' , 'i\'m' , 'i\'ve' , 'ie' , 'if' , 'ignored' , 'immediate' , 'in' , 'inasmuch' , 'inc' , 'indeed' , 'indicate' , 'indicated' , 'indicates' , 'inner' , 'insofar' , 'instead' , 'into' , 'inward' , 'is' , 'isn\'t' , 'it' , 'it\'d' , 'it\'ll' , 'it\'s' , 'its' , 'itself' , 'j' , 'just' , 'k' , 'keep' , 'keeps' , 'kept' , 'know' , 'knows' , 'known' , 'l' , 'last' , 'lately' , 'later' , 'latter' , 'latterly' , 'least' , 'less' , 'lest' , 'let' , 'let\'s' , 'like' , 'liked' , 'likely' , 'little' , 'look' , 'looking' , 'looks' , 'ltd' , 'm' , 'mainly' , 'many' , 'may' , 'maybe' , 'me' , 'mean' , 'meanwhile' , 'merely' , 'might' , 'more' , 'moreover' , 'most' , 'mostly' , 'much' , 'must' , 'my' , 'myself' , 'n' , 'name' , 'namely' , 'nd' , 'near' , 'nearly' , 'necessary' , 'need' , 'needs' , 'neither' , 'never' , 'nevertheless' , 'new' , 'next' , 'nine' , 'no' , 'nobody' , 'non' , 'none' , 'noone' , 'nor' , 'normally' , 'not' , 'nothing' , 'novel' , 'now' , 'nowhere' , 'o' , 'obviously' , 'of' , 'off' , 'often' , 'oh' , 'ok' , 'okay' , 'old' , 'on' , 'once' , 'one' , 'ones' , 'only' , 'onto' , 'or' , 'other' , 'others' , 'otherwise' , 'ought' , 'our' , 'ours' , 'ourselves' , 'out' , 'outside' , 'over' , 'overall' , 'own' , 'p' , 'particular' , 'particularly' , 'per' , 'perhaps' , 'placed' , 'please' , 'plus' , 'possible' , 'presumably' , 'probably' , 'provides' , 'q' , 'que' , 'quite' , 'qv' , 'r' , 'rather' , 'rd' , 're' , 'really' , 'reasonably' , 'regarding' , 'regardless' , 'regards' , 'relatively' , 'respectively' , 'right' , 's' , 'said' , 'same' , 'saw' , 'say' , 'saying' , 'says' , 'second' , 'secondly' , 'see' , 'seeing' , 'seem' , 'seemed' , 'seeming' , 'seems' , 'seen' , 'self' , 
'selves' , 'sensible' , 'sent' , 'serious' , 'seriously' , 'seven' , 'several' , 'shall' , 'she' , 'should' , 'shouldn\'t' , 'since' , 'six' , 'so' , 'some' , 'somebody' , 'somehow' , 'someone' , 'something' , 'sometime' , 'sometimes' , 'somewhat' , 'somewhere' , 'soon' , 'sorry' , 'specified' , 'specify' , 'specifying' , 'still' , 'sub' , 'such' , 'sup' , 'sure' , 't' , 't\'s' , 'take' , 'taken' , 'tell' , 'tends' , 'th' , 'than' , 'thank' , 'thanks' , 'thanx' , 'that' , 'that\'s' , 'thats' , 'the' , 'their' , 'theirs' , 'them' , 'themselves' , 'then' , 'thence' , 'there' , 'there\'s' , 'thereafter' , 'thereby' , 'therefore' , 'therein' , 'theres' , 'thereupon' , 'these' , 'they' , 'they\'d' , 'they\'ll' , 'they\'re' , 'they\'ve' , 'think' , 'third' , 'this' , 'thorough' , 'thoroughly' , 'those' , 'though' , 'three' , 'through' , 'throughout' , 'thru' , 'thus' , 'to' , 'together' , 'too' , 'took' , 'toward' , 'towards' , 'tried' , 'tries' , 'truly' , 'try' , 'trying' , 'twice' , 'two' , 'u' , 'un' , 'under' , 'unfortunately' , 'unless' , 'unlikely' , 'until' , 'unto' , 'up' , 'upon' , 'us' , 'use' , 'used' , 'useful' , 'uses' , 'using' , 'usually' , 'uucp' , 'v' , 'value' , 'various' , 'very' , 'via' , 'viz' , 'vs' , 'w' , 'want' , 'wants' , 'was' , 'wasn\'t' , 'way' , 'we' , 'we\'d' , 'we\'ll' , 'we\'re' , 'we\'ve' , 'welcome' , 'well' , 'went' , 'were' , 'weren\'t' , 'what' , 'what\'s' , 'whatever' , 'when' , 'whence' , 'whenever' , 'where' , 'where\'s' , 'whereafter' , 'whereas' , 'whereby' , 'wherein' , 'whereupon' , 'wherever' , 'whether' , 'which' , 'while' , 'whither' , 'who' , 'who\'s' , 'whoever' , 'whole' , 'whom' , 'whose' , 'why' , 'will' , 'willing' , 'wish' , 'with' , 'within' , 'without' , 'won\'t' , 'wonder' , 'would' , 'would' , 'wouldn\'t' , 'x' , 'y' , 'yes' , 'yet' , 'you' , 'you\'d' , 'you\'ll' , 'you\'re' , 'you\'ve' , 'your' , 'yours' , 'yourself' , 'yourselves' , 'z' , 'zero']
import PorterStemmer
import math
import pickle

def TF(term,doc):
    #Term Frequency: No. of times `term` occured in `doc`
    global InvertedIndex
    idx = InvertedIndex[term].index(doc)
    count = 0
    while (idx < len(InvertedIndex[term])) and InvertedIndex[term][idx] == doc:
        count= count+1
        idx = idx+1
    return count

def DF(term):
    #Document Frequency: No. of documents containing `term`
    global InvertedIndex
    return len(set(InvertedIndex[term]))

def avgTF(term, doc):
    global docs
    TFs = []
    for term in docs[doc]:
        TFs.append(TF(term,doc))
    return sum(TFs)/len(TFs)

def maxTF(term, doc):
    global docs
    TFs = []    
    for term in docs[doc]:
        TFs.append(TF(term,doc))
    return max(TFs)

def getValues4Term(term, doc):
    TermFrequency = {}
    TermFrequency['natural'] = TF(term,doc)
    TermFrequency['log'] = 1+math.log( TF(term,doc) )
    TermFrequency['aug'] = 0.5+float(0.5*TF(term,doc)/maxTF(term,doc))
    TermFrequency['bool'] = 1 if TF(term,doc)>0 else 0
    TermFrequency['log_avg'] = float(1+math.log( TF(term,doc) ))/(1+math.log( avgTF(term,doc) ))

    DocumentFrequency = {}
    DocumentFrequency['no'] = 1
    DocumentFrequency['idf'] = math.log( len(docs)/DF(term) )
    DocumentFrequency['probIDF'] = max( [0, math.log( float(len(docs)-DF(term))/DF(term) )] )
    return [TermFrequency, DocumentFrequency]

def Cosine(resultDocVector, qVector, doc):
    #`doc` parameter is the document number corresponding to resultDocVector
    global qterms,docs
    # Defining Cosine similarity : cos(a) = A.B/|A||B|

    dotProduct = 0
    commonTerms_q_d = set(qterms).intersection(docs[doc]) #commonTerms in both query & document
    for cmnTerm in commonTerms_q_d:
       dotProduct =  dotProduct + resultDocVector[docs[doc].index(cmnTerm)] * qVector[qterms.index(cmnTerm)]

    resultSquares = []
    for k in resultDocVector:
        resultSquares.append(k*k)

    qSquares = []
    for k in qVector:
        qSquares.append(k*k)

    denominator = math.sqrt(sum(resultSquares)) * math.sqrt(sum(qSquares))
    return dotProduct/denominator

def load():
    #load index from a file
    global InvertedIndex, docIDs, docs
    PICKLE_InvertedIndex_FILE = open("InvertedIndex.db", 'rb')
    InvertedIndex = pickle.load(PICKLE_InvertedIndex_FILE)
    PICKLE_InvertedIndex_FILE.close()

    PICKLE_docIDs_FILE = open("docIDs.db", 'rb')
    docIDs = pickle.load(PICKLE_docIDs_FILE)
    PICKLE_docIDs_FILE.close()

    PICKLE_docs_FILE = open("docs.db", 'rb')
    docs = pickle.load(PICKLE_docs_FILE)
    PICKLE_docs_FILE.close()
########################
docs = []
docIDs = []
InvertedIndex = {}
load()

stemmer = PorterStemmer.PorterStemmer()
#<getting results for a query
query = 'Antarctica exploration'
qwords = query.strip().split()
qterms = []
qterms1 = []
for qword in qwords:
    qword = qword.lower()
    if qword in stopwords:
        continue
    qterm = stemmer.stem(qword,0,len(qword)-1) 
    qterms1.append(qterm)
qterms = list(set(qterms1))


#getting posting lists for each qterms & merging them
prev = set()
i = 0
for qterm in qterms:
    if InvertedIndex.has_key(qterm):
        if i == 0:
            prev = set(InvertedIndex[qterm])
            i = i+1
            continue
        prev = prev.intersection(set(InvertedIndex[qterm]))

results = list(prev)
#</getting results for a query

#We've got the results. Now lets rank them using Cosine similarity.
i = 0
docComponents = []
for doc in results:
        docComponents.append([])

i = 0    
for doc in results:
    for term in docs[doc]:
        vals = getValues4Term(term,doc)#[TermFrequency, DocumentFrequency]
        docComponents[i].append(vals)
    i = i+1
#Normalization = {}

# forming vectors for each document in the result
i = 0 #document iterator
j = 0 #term iterator
resultDocVectors = []#contains document vector for each result.

for doc in results:
        resultDocVectors.append([])

for i in range(0,len(results)):
    for j in range(0,len(docs[doc])):
        tf = docComponents[i][j][0]['natural']#0:TermFrequency
        idf = docComponents[i][j][1]['idf'] #1:DocumentFrequency        
        resultDocVectors[i].append(tf*idf)

#forming vector for query
qVector = []
qTF = []
qDF = []
for qterm in qterms:
    count = 0
    idx = qterms1.index(qterm)
    while idx < len(qterms1) and qterms1[idx] == qterm:
        count= count+1
        idx = idx+1
    qTF.append(count)
qVector = qTF    


#compuing Cosine similarities of all resultDocVectors w.r.t qVector
i = 0
CosineVals = []
for resultDocVector in resultDocVectors:
    doc = results[i]
    CosineVals.append(Cosine(resultDocVector, qVector, doc))
    i = i+1

#ranking as per Cosine Similarities
#this is not "perfect" sorting, as it may not give 100% correct results when multiple docs have the same cosine similarity.
CosineValsCopy = CosineVals
CosineVals.sort()
sortedCosineVals = CosineVals
CosineVals = CosineValsCopy
rankedResults = []
for cval in sortedCosineVals:    
    rankedResults.append(results[CosineVals.index(cval)])
rankedResults.reverse()

#<Evaluation of the system:>

#parsing qrels.txt & getting relevances
# qrels.txt contains columns of the form:
#       qid  iter  docno  rel
#2nd column `iter` can be ignored.
relevances = {}
fh = open("qrels.txt")
lines = fh.readlines()
for line in lines:
    cols = line.strip().split()
    if relevances.has_key(cols[0]):#queryID
        relevances[cols[0]].append(cols[2])#docID
    else:
        relevances[cols[0]] = [cols[2]]
fh.close()

#precision = no. of relevant docs retrieved/total no. of docs retrieved
no_of_relevant_docs_retrieved = set(rankedResults).intersection( set(relevances[queryID]) )
Precision = no_of_relevant_docs_retrieved/len(rankedResults)

#recall = no. of relevant docs retrieved/ total no. of relevant docs
Recall = no_of_relevant_docs_retrieved/len(relevances[queryID])

5 Answers:

Answer 0 (score: 18)

It's definitely your code, but since you chose to keep it hidden from us, we can't help any further. All I can tell you, based on the very scarce information you chose to supply, is that unpickling a dict (done the right way) is much faster, and indexing into it (assuming that's what you mean by "searching for the query") is blazingly fast. From this data I deduce that the cause of your slowdown must be something else you're doing, or doing wrong, in your code.

Edit: now that you have posted your code, I notice at a glance that a lot of nontrivial code runs at module top level. That is a really horrible practice, and bad for performance: put all of that nontrivial code into a function and call that function -- that alone can buy you tens of percent of a speedup, at zero cost in complexity. I must have mentioned that crucial fact at least 20 times in my Stack Overflow posts, not to mention "Python in a Nutshell" etc. -- surely, if you care about performance, you cannot blithely ignore such easily available and widespread information?!

Easier-to-fix wastes of runtime:

import pickle

use cPickle instead (if you're not on Python 2.6 or 2.7 but on 3.1, then there may be other causes of the performance problems -- I don't know how finely tuned 3.1 is at this time, compared to the amazing performance of 2.6 and 2.7).
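
On Python 2, a common hedge is to try the C-implemented pickler and fall back to the pure-Python module; on Python 3 plain pickle already uses the C implementation, so the fallback is a no-op there. A minimal sketch with made-up index data:

```python
# Prefer the fast C pickler on Python 2; on Python 3, `pickle` is already C-backed.
try:
    import cPickle as pickle  # Python 2 only
except ImportError:
    import pickle

# Round-trip a small hypothetical index to show the API is unchanged.
index = {'antarct': [1, 1, 4], 'explor': [1, 2]}
blob = pickle.dumps(index, protocol=pickle.HIGHEST_PROTOCOL)
assert pickle.loads(blob) == index
```

Using HIGHEST_PROTOCOL (a binary protocol) is also usually faster than the default text protocol on Python 2.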

All the global statements are useless except the one in load (not a serious performance hit, but redundant and useless code should be removed on principle). You only need a global statement if you want to bind a module-global variable inside a function, and load is the only function where you're doing that.

More edits:

Now we get to the more important stuff: the values in InvertedIndex appear to be lists of documents, so to know how many times a doc appears in one of them you have to loop over it. Why not make each value a dict mapping doc to number of occurrences instead? No looping (and no len(set(...)) over the values of InvertedIndex -- a plain len will be equivalent, and you save the set(...) operation, which implicitly has to loop to do its job, just as TF(term, doc) currently must). This is a big-O optimization, not "merely" the 20%-or-so speedup that the things I've mentioned so far might account for -- i.e., this is the more important stuff, as I said. Use the right data structures and algorithms and many minor inefficiencies can become relatively unimportant; use the wrong ones and there's no saving your code's performance as input sizes grow, no matter how cleverly you micro-optimize the wrong data structures and algorithms ;-).

More: you're recomputing lots of stuff "from scratch" each time -- e.g., see how many times you call TF(term, doc) for any given term and doc (and each call has the crucial inefficiency I just explained). As the quickest way to fix that huge inefficiency, use memoization -- e.g., with the memoized decorator found here.
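
The decorator link is lost here, but in modern Python the standard library's functools.lru_cache does the same job. A minimal sketch around a stand-in TF with made-up data:

```python
from functools import lru_cache

InvertedIndex = {'explor': [1, 1, 4]}  # hypothetical posting list

@lru_cache(maxsize=None)  # results cached, keyed on (term, doc)
def TF(term, doc):
    return InvertedIndex[term].count(doc)

assert TF('explor', 1) == 2
assert TF('explor', 1) == 2          # second call is served from the cache
assert TF.cache_info().hits >= 1     # at least one cache hit recorded
```

Note the usual caveat: memoization is only safe while InvertedIndex does not change between calls.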

OK, it's getting late for me and I'd better get to bed -- I hope some or all of the above suggestions are of some use to you!

Answer 1 (score: 5)

Alex has given you good advice about algorithmic changes. I'm just going to address writing fast Python code. You should do both. If you were simply to incorporate my changes, you would still (based on what Alex said) have a broken program, but I don't understand your whole program right now and I wanted to have some fun micro-optimizing. Even if you end up throwing a lot of these functions away, comparing a slow implementation with a fast one will help you write fast implementations of new functions.

Take the following function:

def TF(term,doc):
    #Term Frequency: No. of times `term` occured in `doc`
    global InvertedIndex
    idx = InvertedIndex[term].index(doc)
    count = 0
    while (idx < len(InvertedIndex[term])) and InvertedIndex[term][idx] == doc:
        count= count+1
        idx = idx+1
    return count

Rewrite it as:

def TF(term, doc):
    idx = InvertedIndex[term].index(doc)        
    return next(i + 1 for i, item in enumerate(InvertedIndex[term][idx:])
                if item != doc)


# Above struck out because the count method does the same thing and there was a bug
# in the implementation anyways.
InvertedIndex[term].count(doc)

This creates a generator expression that yields the positions, counted from the first index of doc, of the items that are not equal to it; the next function evaluates the first element, which will be your count. (As the comment above notes, this was struck out in favor of the count method.)
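
A quick check on made-up data that the built-in count method really does agree with the original while loop:

```python
# Hypothetical posting list: doc 7 occurs three times.
postings = [3, 3, 7, 7, 7, 9]

# Original approach: find the first index, then scan while it still matches.
idx = postings.index(7)
count = 0
while idx < len(postings) and postings[idx] == 7:
    count += 1
    idx += 1

assert count == postings.count(7) == 3
```

Note count only matches the loop because the posting lists are sorted, so equal docs are adjacent; count itself works on unsorted lists too.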

There are some functions here that you'll definitely want to look up in the documentation.

Some syntax you'll want:

  • generator expressions (just like list comprehensions but mo' bettah (unless you need a list ;))
  • list comprehensions

Last but not least, the most important (IMNAHAIPSBO) Python module:

Here's another function:

def maxTF(term, doc):
    global docs
    TFs = []    
    for term in docs[doc]:
        TFs.append(TF(term,doc))
    return max(TFs)

You can rewrite it with a generator expression:

def maxTF(term, doc):
    return max(TF(term, doc) for term in docs[doc])

Generator expressions typically run at close to twice the speed of a for loop.
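
The actual speedup varies by workload and interpreter; a quick way to measure it on your own data is the standard library's timeit module. A sketch comparing the two styles on hypothetical data:

```python
import timeit

setup = "data = list(range(1000))"
loop_version = """
out = []
for x in data:
    out.append(x * x)
m = max(out)
"""
genexp_version = "m = max(x * x for x in data)"

# Time each version over 1000 runs; absolute numbers depend on the machine.
t_loop = timeit.timeit(loop_version, setup=setup, number=1000)
t_gen = timeit.timeit(genexp_version, setup=setup, number=1000)
print(t_loop, t_gen)
```

No expected output is given because the relative timings depend on the interpreter and machine; the point is to measure rather than assume.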

Finally, here's your Cosine function:

def Cosine(resultDocVector, qVector, doc):
    #`doc` parameter is the document number corresponding to resultDocVector
    global qterms,docs
    # Defining Cosine similarity : cos(a) = A.B/|A||B|

    dotProduct = 0
    commonTerms_q_d = set(qterms).intersection(docs[doc]) #commonTerms in both query & document
    for cmnTerm in commonTerms_q_d:
       dotProduct =  dotProduct + resultDocVector[docs[doc].index(cmnTerm)] * qVector[qterms.index(cmnTerm)]

    resultSquares = []
    for k in resultDocVector:
        resultSquares.append(k*k)

    qSquares = []
    for k in qVector:
        qSquares.append(k*k)

Let's rewrite this as:

def Cosine(resultDocVector, qVector, doc):
    doc = docs[doc]
    commonTerms_q_d = set(qterms).intersection(doc)
    dotProduct = sum(resultDocVector[doc.index(cmnTerm)] *qVector[qterms.index(cmnTerm)]
                     for cmnTerm in commonTerms_q_d)

    denominator = sum(k**2 for k in resultDocVector)
    denominator *= sum(k**2 for k in qVector)
    denominator = math.sqrt(denominator) 

    return dotProduct/denominator

Here, we've tossed out every for loop. Code of the form

lst = []
for item in other_lst:
    lst.append(somefunc(item))

is just about the slowest possible way to build a list. First, for/while loops are slow to begin with, and appending to a list is slow. You have the worst of both worlds. A good attitude is to code as if loops were taxed (performance-wise, they are). Pay that tax only when there's nothing you can do with map or a comprehension, or when it makes your code more readable and you know it's not a bottleneck. Comprehensions are very readable once you get used to them.

Answer 2 (score: 4)

Here are more micro-optimizations in the spirit of @aaronasterling's. Still, I think these observations are worth considering.

Use appropriate data types

stopwords should be a set. You can't repeatedly search a list and expect it to be fast.

Use more sets. They're iterable just like lists, but when you have to search them, they're much faster than lists.


List comprehensions

resultSquares = [k*k for k in resultDocVector]
qSquares = [k*k for k in qVector]
TFs = [TF(term,doc) for term in docs[doc]]

Generators

Turn this:

for qword in qwords:
    qword = qword.lower()
    if qword in stopwords:
        continue
    qterm = stemmer.stem(qword,0,len(qword)-1) 
    qterms1.append(qterm)
qterms = list(set(qterms1))

Into this:

qworditer = (qword.lower() for qword in qwords if qword not in stopwords)
qtermiter = (stemmer.stem(qword,0,len(qword)-1) for qword in qworditer)
qterms1 = set([qterm for qterm in qtermiter])

Use generators and reduce()

Turn this:

prev = set()
i = 0
for qterm in qterms:
    if InvertedIndex.has_key(qterm):
        if i == 0:
            prev = set(InvertedIndex[qterm])
            i = i+1
            continue
        prev = prev.intersection(set(InvertedIndex[qterm]))

results = list(prev)

Into this:

qtermiter = (set(InvertedIndex[qterm]) for qterm in qterms if qterm in InvertedIndex)
results = reduce(set.intersection, qtermiter)
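
Two hedges on the reduce() version: in Python 3 reduce lives in functools, and reduce over an empty sequence raises TypeError, so the case where no query term is in the index needs a guard. A sketch on made-up data:

```python
from functools import reduce  # reduce is not a builtin in Python 3

InvertedIndex = {'antarct': [1, 2, 4], 'explor': [1, 4, 9]}  # hypothetical
qterms = ['antarct', 'explor', 'missingterm']

posting_sets = [set(InvertedIndex[q]) for q in qterms if q in InvertedIndex]
# Guard against an empty list: reduce() with no items and no initializer raises.
results = list(reduce(set.intersection, posting_sets)) if posting_sets else []

assert sorted(results) == [1, 4]
```

Alternatively, set.intersection(*posting_sets) does the same merge in one call once posting_sets is known to be non-empty.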

Use list comprehensions

Instead of this:

i = 0
docComponents = []
for doc in results:
        docComponents.append([])

i = 0    
for doc in results:
    for term in docs[doc]:
        vals = getValues4Term(term,doc)#[TermFrequency, DocumentFrequency]
        docComponents[i].append(vals)
    i = i+1

Write this:

docComponents = [getValues4Term(term,doc) for doc in results for term in docs[doc]]

This code makes no sense:

for doc in results:
        resultDocVectors.append([])

for i in range(0,len(results)):
    for j in range(0,len(docs[doc])):
        tf = docComponents[i][j][0]['natural']#0:TermFrequency
        idf = docComponents[i][j][1]['idf'] #1:DocumentFrequency        
        resultDocVectors[i].append(tf*idf)

len(docs[doc]) depends on doc, and the value of doc is whatever it last reached in the loop for doc in results.
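
A sketch of what the loop presumably meant, on hypothetical data: look up docs by the i-th result rather than by the stale doc left over from the earlier loop:

```python
# Hypothetical structures: results[i] is a doc ID; docComponents[i] holds,
# per term of that doc, a (TermFrequency dict, DocumentFrequency dict) pair.
results = [5, 9]
docs = {5: ['antarct', 'explor'], 9: ['explor']}
docComponents = [
    [({'natural': 2}, {'idf': 1.5}), ({'natural': 1}, {'idf': 0.5})],
    [({'natural': 3}, {'idf': 0.5})],
]

resultDocVectors = []
for i, doc in enumerate(results):
    # Use docs[doc] for THIS result, not whatever `doc` happened to be last.
    vector = [docComponents[i][j][0]['natural'] * docComponents[i][j][1]['idf']
              for j in range(len(docs[doc]))]
    resultDocVectors.append(vector)

assert resultDocVectors == [[3.0, 0.5], [1.5]]
```

This is only an assumed reading of the original intent (tf*idf per term of each result document), not the poster's confirmed fix.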


Use collections.defaultdict

Instead of this:

relevances = {}
fh = open("qrels.txt")
lines = fh.readlines()
for line in lines:
    cols = line.strip().split()
    if relevances.has_key(cols[0]):#queryID
        relevances[cols[0]].append(cols[2])#docID
    else:
        relevances[cols[0]] = [cols[2]]

Write this (assuming your file really has only three fields per line):

from collections import defaultdict
relevances = defaultdict(list)
with open("qrels.txt") as fh:
    lineiter = (line.strip().split() for line in fh)
    for queryID, _, docID in lineiter:
        relevances[queryID].append(docID)

As many others have said, memoize your computations.


2010-10-21: An update regarding the stopwords thing.

from datetime import datetime

stopwords = ['a' , 'a\'s' , 'able' , 'about' , 'above' , 'according' , 'accordingly' , 'across' , 'actually' , 'after' , 'afterwards' , 'again' , 'against' , 'ain\'t' , 'all' , 'allow' , 'allows' , 'almost' , 'alone' , 'along' , 'already' , 'also' , 'although' , 'always' , 'am' , 'among' , 'amongst' , 'an' , 'and' , 'another' , 'any' , 'anybody' , 'anyhow' , 'anyone' , 'anything' , 'anyway' , 'anyways' , 'anywhere' , 'apart' , 'appear' , 'appreciate' , 'appropriate' , 'are' , 'aren\'t' , 'around' , 'as' , 'aside' , 'ask' , 'asking' , 'associated' , 'at' , 'available' , 'away' , 'awfully' , 'b' , 'be' , 'became' , 'because' , 'become' , 'becomes' , 'becoming' , 'been' , 'before' , 'beforehand' , 'behind' , 'being' , 'believe' , 'below' , 'beside' , 'besides' , 'best' , 'better' , 'between' , 'beyond' , 'both' , 'brief' , 'but' , 'by' , 'c' , 'c\'mon' , 'c\'s' , 'came' , 'can' , 'can\'t' , 'cannot' , 'cant' , 'cause' , 'causes' , 'certain' , 'certainly' , 'changes' , 'clearly' , 'co' , 'com' , 'come' , 'comes' , 'concerning' , 'consequently' , 'consider' , 'considering' , 'contain' , 'containing' , 'contains' , 'corresponding' , 'could' , 'couldn\'t' , 'course' , 'currently' , 'd' , 'definitely' , 'described' , 'despite' , 'did' , 'didn\'t' , 'different' , 'do' , 'does' , 'doesn\'t' , 'doing' , 'don\'t' , 'done' , 'down' , 'downwards' , 'during' , 'e' , 'each' , 'edu' , 'eg' , 'eight' , 'either' , 'else' , 'elsewhere' , 'enough' , 'entirely' , 'especially' , 'et' , 'etc' , 'even' , 'ever' , 'every' , 'everybody' , 'everyone' , 'everything' , 'everywhere' , 'ex' , 'exactly' , 'example' , 'except' , 'f' , 'far' , 'few' , 'fifth' , 'first' , 'five' , 'followed' , 'following' , 'follows' , 'for' , 'former' , 'formerly' , 'forth' , 'four' , 'from' , 'further' , 'furthermore' , 'g' , 'get' , 'gets' , 'getting' , 'given' , 'gives' , 'go' , 'goes' , 'going' , 'gone' , 'got' , 'gotten' , 'greetings' , 'h' , 'had' , 'hadn\'t' , 'happens' , 'hardly' , 'has' , 'hasn\'t' , 'have' 
, 'haven\'t' , 'having' , 'he' , 'he\'s' , 'hello' , 'help' , 'hence' , 'her' , 'here' , 'here\'s' , 'hereafter' , 'hereby' , 'herein' , 'hereupon' , 'hers' , 'herself' , 'hi' , 'him' , 'himself' , 'his' , 'hither' , 'hopefully' , 'how' , 'howbeit' , 'however' , 'i' , 'i\'d' , 'i\'ll' , 'i\'m' , 'i\'ve' , 'ie' , 'if' , 'ignored' , 'immediate' , 'in' , 'inasmuch' , 'inc' , 'indeed' , 'indicate' , 'indicated' , 'indicates' , 'inner' , 'insofar' , 'instead' , 'into' , 'inward' , 'is' , 'isn\'t' , 'it' , 'it\'d' , 'it\'ll' , 'it\'s' , 'its' , 'itself' , 'j' , 'just' , 'k' , 'keep' , 'keeps' , 'kept' , 'know' , 'knows' , 'known' , 'l' , 'last' , 'lately' , 'later' , 'latter' , 'latterly' , 'least' , 'less' , 'lest' , 'let' , 'let\'s' , 'like' , 'liked' , 'likely' , 'little' , 'look' , 'looking' , 'looks' , 'ltd' , 'm' , 'mainly' , 'many' , 'may' , 'maybe' , 'me' , 'mean' , 'meanwhile' , 'merely' , 'might' , 'more' , 'moreover' , 'most' , 'mostly' , 'much' , 'must' , 'my' , 'myself' , 'n' , 'name' , 'namely' , 'nd' , 'near' , 'nearly' , 'necessary' , 'need' , 'needs' , 'neither' , 'never' , 'nevertheless' , 'new' , 'next' , 'nine' , 'no' , 'nobody' , 'non' , 'none' , 'noone' , 'nor' , 'normally' , 'not' , 'nothing' , 'novel' , 'now' , 'nowhere' , 'o' , 'obviously' , 'of' , 'off' , 'often' , 'oh' , 'ok' , 'okay' , 'old' , 'on' , 'once' , 'one' , 'ones' , 'only' , 'onto' , 'or' , 'other' , 'others' , 'otherwise' , 'ought' , 'our' , 'ours' , 'ourselves' , 'out' , 'outside' , 'over' , 'overall' , 'own' , 'p' , 'particular' , 'particularly' , 'per' , 'perhaps' , 'placed' , 'please' , 'plus' , 'possible' , 'presumably' , 'probably' , 'provides' , 'q' , 'que' , 'quite' , 'qv' , 'r' , 'rather' , 'rd' , 're' , 'really' , 'reasonably' , 'regarding' , 'regardless' , 'regards' , 'relatively' , 'respectively' , 'right' , 's' , 'said' , 'same' , 'saw' , 'say' , 'saying' , 'says' , 'second' , 'secondly' , 'see' , 'seeing' , 'seem' , 'seemed' , 'seeming' , 'seems' , 'seen' , 'self' , 
'selves' , 'sensible' , 'sent' , 'serious' , 'seriously' , 'seven' , 'several' , 'shall' , 'she' , 'should' , 'shouldn\'t' , 'since' , 'six' , 'so' , 'some' , 'somebody' , 'somehow' , 'someone' , 'something' , 'sometime' , 'sometimes' , 'somewhat' , 'somewhere' , 'soon' , 'sorry' , 'specified' , 'specify' , 'specifying' , 'still' , 'sub' , 'such' , 'sup' , 'sure' , 't' , 't\'s' , 'take' , 'taken' , 'tell' , 'tends' , 'th' , 'than' , 'thank' , 'thanks' , 'thanx' , 'that' , 'that\'s' , 'thats' , 'the' , 'their' , 'theirs' , 'them' , 'themselves' , 'then' , 'thence' , 'there' , 'there\'s' , 'thereafter' , 'thereby' , 'therefore' , 'therein' , 'theres' , 'thereupon' , 'these' , 'they' , 'they\'d' , 'they\'ll' , 'they\'re' , 'they\'ve' , 'think' , 'third' , 'this' , 'thorough' , 'thoroughly' , 'those' , 'though' , 'three' , 'through' , 'throughout' , 'thru' , 'thus' , 'to' , 'together' , 'too' , 'took' , 'toward' , 'towards' , 'tried' , 'tries' , 'truly' , 'try' , 'trying' , 'twice' , 'two' , 'u' , 'un' , 'under' , 'unfortunately' , 'unless' , 'unlikely' , 'until' , 'unto' , 'up' , 'upon' , 'us' , 'use' , 'used' , 'useful' , 'uses' , 'using' , 'usually' , 'uucp' , 'v' , 'value' , 'various' , 'very' , 'via' , 'viz' , 'vs' , 'w' , 'want' , 'wants' , 'was' , 'wasn\'t' , 'way' , 'we' , 'we\'d' , 'we\'ll' , 'we\'re' , 'we\'ve' , 'welcome' , 'well' , 'went' , 'were' , 'weren\'t' , 'what' , 'what\'s' , 'whatever' , 'when' , 'whence' , 'whenever' , 'where' , 'where\'s' , 'whereafter' , 'whereas' , 'whereby' , 'wherein' , 'whereupon' , 'wherever' , 'whether' , 'which' , 'while' , 'whither' , 'who' , 'who\'s' , 'whoever' , 'whole' , 'whom' , 'whose' , 'why' , 'will' , 'willing' , 'wish' , 'with' , 'within' , 'without' , 'won\'t' , 'wonder' , 'would' , 'would' , 'wouldn\'t' , 'x' , 'y' , 'yes' , 'yet' , 'you' , 'you\'d' , 'you\'ll' , 'you\'re' , 'you\'ve' , 'your' , 'yours' , 'yourself' , 'yourselves' , 'z' , 'zero']
print len(stopwords)
dictfile = '/usr/share/dict/american-english-huge'
with open(dictfile) as f:
    words = [line.strip() for line in f]

print len(words)

s = datetime.now()
total = sum(1 for word in words if word in stopwords)
e = datetime.now()
elapsed = e - s
print elapsed, total

s = datetime.now()
stopwords_set = set(stopwords)
total = sum(1 for word in words if word in stopwords_set)
e = datetime.now()
elapsed = e - s
print elapsed, total

I got these results:

# Using list
>>> print elapsed, total
0:00:06.902529 542

# Using set
>>> print elapsed, total
0:00:00.050676 542

Same number of results, but one runs almost 140 times faster. Granted, you probably don't have that many words to compare against your stopwords, and 6 seconds is negligible against your 30-minute runtime. But it does emphasize that using the appropriate data structures can speed up your code.

Answer 3 (score: 2)

It's nice that you posted code in response to people's requests, but unless they actually run it and profile it, the best they can do is guess. I could guess too, but even when guesses are "good" or "educated", they're not a good way to find performance problems.

I would rather point you to a technique that will pinpoint the problem. That works better than guessing, or asking others to guess. Once you have found out for yourself exactly where the problem is, you can decide whether to fix it with memoization or anything else.

Usually there's more than one problem. If you repeat the process of finding and removing performance problems, you'll approach a true optimum.
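
The linked technique aside, the standard library's cProfile/pstats pair will show where the 30 minutes actually go. A minimal sketch around a stand-in search function (the real code would profile the query-and-ranking path instead):

```python
import cProfile
import io
import pstats

def search():
    # Stand-in for the real query/ranking code being profiled.
    return sum(i * i for i in range(100000))

profiler = cProfile.Profile()
profiler.enable()
search()
profiler.disable()

# Render the ten most expensive entries by cumulative time.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats('cumulative').print_stats(10)
report = buf.getvalue()
print(report)
```

The rows at the top of the cumulative-time listing are the places worth optimizing first.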

Answer 4 (score: 1)

Does Python cache function results? I don't think so. In that case, running a looping function like TF(term, doc) many times inside getValues4Term() is probably a bad idea. You would likely already get a huge speed boost by putting the result into a variable. Combined with

for doc in results:
    for term in docs[doc]:
        vals = getValues4Term(term,doc)

that may well be your biggest speed problem.
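
The simplest version of that advice, sketched on a stand-in with made-up data: compute TF(term, doc) once per (term, doc) and reuse the variable, rather than calling it once per entry of the TermFrequency dict:

```python
InvertedIndex = {'explor': [1, 1, 4]}  # hypothetical posting list
calls = {'n': 0}                       # instrument how often TF runs

def TF(term, doc):
    calls['n'] += 1
    return InvertedIndex[term].count(doc)

# Instead of calling TF('explor', 1) once per dictionary entry:
tf = TF('explor', 1)
values = {'natural': tf, 'bool': 1 if tf > 0 else 0}

assert values == {'natural': 2, 'bool': 1}
assert calls['n'] == 1  # one call instead of one per entry
```

In the posted getValues4Term, the same TF(term, doc) is evaluated five or more times per call (including indirectly via maxTF and avgTF), so this single-variable change compounds quickly.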