Training LDA on a corpus with gensim

Asked: 2013-04-28 04:37:22

Tags: python lda gensim

I have a corpus of roughly 20,000 documents, and I need to train an LDA topic model on this dataset.

import logging, gensim

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
id2word = gensim.corpora.Dictionary('questions.dict')
mm = gensim.corpora.MmCorpus('questions.mm')
lda = gensim.models.ldamodel.LdaModel(corpus=mm, id2word=id2word, num_topics=100, update_every=0, chunksize=3000, passes=20)
lda.print_topics(20)

Whenever I run this program, I get this error:

2013-04-28 09:57:09,750 : INFO : adding document #0 to Dictionary(0 unique tokens)
2013-04-28 09:57:09,759 : INFO : built Dictionary(11 unique tokens) from 14 documents (total 14 corpus positions)
2013-04-28 09:57:09,785 : INFO : loaded corpus index from questions.mm.index
2013-04-28 09:57:09,790 : INFO : initializing corpus reader from questions.mm
2013-04-28 09:57:09,796 : INFO : accepted corpus with 19188 documents, 15791 features, 106222 non-zero entries
2013-04-28 09:57:09,802 : INFO : using serial LDA version on this node
2013-04-28 09:57:09,808 : INFO : running batch LDA training, 100 topics, 20 passes over the supplied corpus of 19188 documents, updating model once every 19188 documents
2013-04-28 09:57:10,267 : INFO : PROGRESS: iteration 0, at document #3000/19188

Traceback (most recent call last):
File "C:/Users/Animesh/Desktop/NLP/topicmodel/lda.py", line 10, in <module>
lda = gensim.models.ldamodel.LdaModel(corpus=mm, id2word=id2word, num_topics=100, update_every=0, chunksize=3000, passes=20)
File "C:\Python27\lib\site-packages\gensim-0.8.6-py2.7.egg\gensim\models\ldamodel.py", line 265, in __init__
self.update(corpus)
File "C:\Python27\lib\site-packages\gensim-0.8.6-py2.7.egg\gensim\models\ldamodel.py", line 445, in update
self.do_estep(chunk, other)
File "C:\Python27\lib\site-packages\gensim-0.8.6-py2.7.egg\gensim\models\ldamodel.py", line 365, in do_estep
gamma, sstats = self.inference(chunk, collect_sstats=True)
File "C:\Python27\lib\site-packages\gensim-0.8.6-py2.7.egg\gensim\models\ldamodel.py", line 318, in inference
expElogbetad = self.expElogbeta[:, ids]
IndexError: index (11) out of range (0<=index<10) in dimension 1

I have even tried changing the values passed to LdaModel, but I always get the same error!

What should I do?

1 Answer:

Answer 0 (score: 2)

Your dictionary (id2word) does not appear to match your corpus object (mm).

For whatever reason, id2word (the mapping of word IDs to word tokens) contains only 11 tokens:

    2013-04-28 09:57:09,759 : INFO : built Dictionary(11 unique tokens) from 14 documents (total 14 corpus positions)

Your corpus contains 15791 features, so as soon as it looks up a feature with an ID greater than 10, it fails. In the line

    expElogbetad = self.expElogbeta[:, ids]

ids is the list of all the word IDs in a particular document.
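You can confirm the mismatch before training. A minimal diagnostic sketch, reusing the question's own loading code and file names, just compares the vocabulary size of the dictionary with the number of features in the corpus:

import gensim

# Build/load the objects exactly as in the question and compare their sizes.
id2word = gensim.corpora.Dictionary('questions.dict')
mm = gensim.corpora.MmCorpus('questions.mm')
print('%d tokens in the dictionary' % len(id2word))   # 11 in the log above
print('%d features in the corpus' % mm.num_terms)     # 15791 in the log above
# A valid id2word must cover every feature ID that appears in the corpus.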

I would recreate the corpus and dictionary:

$ python -m gensim.scripts.make_wiki (from the gensim LDA tutorial).

The logging output from the dictionary creation should then report far more than 11 tokens, I think. I ran into a similar problem myself.
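If your documents are not a Wikipedia dump, the same rebuild can be done directly in Python. A minimal sketch, assuming you still have access to the tokenized documents (the texts variable below is a hypothetical placeholder for your own preprocessing output; file names are taken from the question):

import gensim

# Hypothetical: one list of tokens per document, produced by your own preprocessing.
texts = [['human', 'interface', 'computer'],
         ['survey', 'user', 'computer', 'system', 'response', 'time']]

# Build the dictionary from the *same* tokenized documents that will form the corpus...
id2word = gensim.corpora.Dictionary(texts)
id2word.save('questions.dict')

# ...and serialize the bag-of-words corpus using that dictionary, so the IDs line up.
corpus = [id2word.doc2bow(text) for text in texts]
gensim.corpora.MmCorpus.serialize('questions.mm', corpus)

# The question's original training call should then run without the IndexError.
mm = gensim.corpora.MmCorpus('questions.mm')
lda = gensim.models.ldamodel.LdaModel(corpus=mm, id2word=id2word,
                                      num_topics=100, update_every=0,
                                      chunksize=3000, passes=20)

The key point is that the dictionary and the serialized corpus come from the same tokenized texts, so every feature ID in questions.mm is covered by id2word.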