Filter out tokens that appear exactly once in a gensim dictionary

Asked: 2014-02-27 20:18:06

Tags: python-2.7 gensim

The gensim Dictionary object has a very nice filtering feature for removing tokens that appear in fewer than a set number of documents. However, I want to remove tokens that appear exactly once in the corpus. Does anyone know a quick and easy way of doing this?

4 Answers:

Answer 0 (score: 4)

You should include some reproducible code in your question; however, I'll use the documents from a previous post. We can achieve your goal without using gensim.

from collections import defaultdict
documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
              "The EPS user interface management system",
              "System and human system engineering testing of EPS",
              "Relation of user perceived response time to error measurement",
              "The generation of random binary unordered trees",
              "The intersection graph of paths in trees",
              "Graph minors IV Widths of trees and well quasi ordering",
              "Graph minors A survey"]

# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist] for document in documents]

# word frequency
d = defaultdict(int)
for lister in texts:
    for item in lister:
        d[item] += 1

# remove words that appear only once
tokens = [key for key, value in d.items() if value > 1]
texts = [[word for word in document if word in tokens] for document in texts]
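
A couple of small follow-ups (not part of the original answer): membership tests against a list are linear, so a set is faster for a large vocabulary, and if you ultimately want a gensim Dictionary, the filtered texts can be fed straight in. A minimal sketch, assuming the d and texts variables from above:

from gensim import corpora

tokens = set(key for key, value in d.items() if value > 1)  # set membership is O(1)
texts = [[word for word in document if word in tokens] for document in texts]
dictionary = corpora.Dictionary(texts)  # dictionary built from the already-filtered texts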


To add some information: beyond the approach above, you may find that the gensim tutorial also covers a more memory-efficient technique. I've added some print statements so you can see what happens at each step. Your specific question is answered at the DICTERATOR step; I realize the following may be overkill for your question, but if you need to do any kind of topic modeling, this information is a step in the right direction.

$ cat mycorpus.txt

Human machine interface for lab abc computer applications
A survey of user opinion of computer system response time
The EPS user interface management system
System and human system engineering testing of EPS
Relation of user perceived response time to error measurement
The generation of random binary unordered trees
The intersection graph of paths in trees
Graph minors IV Widths of trees and well quasi ordering
Graph minors A survey  

Run the following create_corpus.py:

#!/usr/bin/env python
from gensim import corpora, models, similarities

stoplist = set('for a of the and to in'.split())

class MyCorpus(object):
    def __iter__(self):
        for line in open('mycorpus.txt'):
            # assume there's one document per line, tokens separated by whitespace
            yield dictionary.doc2bow(line.lower().split()) 

# TOKENIZERATOR: collect statistics about all tokens
dictionary = corpora.Dictionary(line.lower().split() for line in open('mycorpus.txt'))
print(dictionary)
print(dictionary.token2id)

# DICTERATOR: remove stop words and words that appear only once
stop_ids = [dictionary.token2id[stopword] for stopword in stoplist if stopword in dictionary.token2id]
once_ids = [tokenid for tokenid, docfreq in dictionary.dfs.iteritems() if docfreq == 1]
dictionary.filter_tokens(stop_ids + once_ids)
print(dictionary)
print(dictionary.token2id)

dictionary.compactify() # remove gaps in id sequence after words that were removed
print(dictionary)
print(dictionary.token2id)

# VECTORERATOR: map token frequencies per doc to vectors
corpus_memory_friendly = MyCorpus() # doesn't load the corpus into memory!
for item in corpus_memory_friendly:
    print(item)
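
If you go on to topic modeling, the usual next step in the tutorial is to serialize the bag-of-words corpus so it can be streamed from disk later. A minimal sketch (the filename is just an example):

corpora.MmCorpus.serialize('mycorpus.mm', corpus_memory_friendly)  # save in Matrix Market format
corpus = corpora.MmCorpus('mycorpus.mm')  # streams documents from disk on iteration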
Good luck!

Answer 1 (score: 2)

You may want to look at the gensim Dictionary filter_extremes method:

filter_extremes(no_below=5, no_above=0.5, keep_n=100000)
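
Note that no_below counts document frequency, not raw corpus frequency: a token that occurs once in the whole corpus necessarily occurs in exactly one document, so the call below removes it (it would also remove a token that occurs several times but only within a single document). A minimal sketch, assuming dictionary is a gensim corpora.Dictionary:

dictionary.filter_extremes(no_below=2, no_above=1.0, keep_n=None)  # drop tokens seen in fewer than 2 documents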

Answer 2 (score: 0)

Found this in the Gensim tutorial:

from gensim import corpora, models, similarities
from pprint import pprint  # pretty-printer

documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]
# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in documents]

# remove words that appear only once
all_tokens = sum(texts, [])
tokens_once = set(word for word in set(all_tokens) if all_tokens.count(word) == 1)
texts = [[word for word in text if word not in tokens_once]
         for text in texts]
pprint(texts)

[['human', 'interface', 'computer'],
 ['survey', 'user', 'computer', 'system', 'response', 'time'],
 ['eps', 'user', 'interface', 'system'],
 ['system', 'human', 'system', 'eps'],
 ['user', 'response', 'time'],
 ['trees'],
 ['graph', 'trees'],
 ['graph', 'minors', 'trees'],
 ['graph', 'minors', 'survey']]

Basically, it iterates over a list containing the entire corpus and, for every word that appears only once, adds it to a set of tokens. It then iterates over every word in every document and removes the word if it is in that set of once-only tokens.

I assume this is the best way to do it; otherwise the tutorial would mention something else. But I could be wrong.
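
As an aside (not from the tutorial): all_tokens.count(word) rescans the whole corpus for every distinct word, which becomes slow on large inputs. A collections.Counter does the same counting in a single pass; a minimal sketch, assuming the texts variable from above:

from collections import Counter

counts = Counter(word for text in texts for word in text)  # one pass over the corpus
tokens_once = set(word for word, count in counts.items() if count == 1)
texts = [[word for word in text if word not in tokens_once] for text in texts]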

Answer 3 (score: 0)

def get_term_frequency(dictionary, cutoff_freq):
    """Return a list of (term, frequency) tuples, keeping only terms whose
       frequency is greater than cutoff_freq.

       dictionary (gensim.corpora.Dictionary): corpus dictionary
       cutoff_freq (int): terms with a frequency at or below this are dropped
    """
    tf = []
    # dfs maps token id -> number of documents the token appears in
    for token_id, freq in dictionary.dfs.iteritems():
        tf.append((str(dictionary.get(token_id)), freq))
    return filter(lambda t: t[1] > cutoff_freq, tf)
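
A hedged usage sketch (assuming texts is a list of tokenized documents, as in the answers above; note that dfs holds document frequencies, so "frequency" here means the number of documents a term appears in):

from gensim import corpora

dictionary = corpora.Dictionary(texts)
print(get_term_frequency(dictionary, 1))  # terms appearing in more than one document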