How to compute LDA perplexity when using Gibbs sampling

Asked: 2018-07-09 18:04:46

Tags: r lda topic-modeling perplexity

I am running an LDA topic model in R on a collection of 200+ documents (65k words in total). The documents have been preprocessed and are stored in a document-term matrix dtm. In theory I expect to find 5 distinct topics in the corpus, but I would like to compute a perplexity score and see how the model changes as the number of topics varies. Below is the code I am using. The problem is that when I try to compute the perplexity score, I get an error that I am not sure how to resolve (I am new to R). The error occurs on the last line of the code. Any help would be appreciated.

library(topicmodels)  #provides LDA() and perplexity()

burnin <- 4000  #burn-in parameter
iter <- 2000    #number of iterations after burn-in
thin <- 500     #take every 500th iteration for further use to avoid correlations between samples
seed <-list(2003,10,100,10005,765)
nstart <- 5     #use 5 different starting points
best <- TRUE    #return results of the run with the highest posterior probability

#Number of topics (run the algorithm for different values of k and make a choice based by inspecting the results)
k <- 5

#Run LDA using Gibbs sampling
ldaOut <-LDA(dtm,k, method="Gibbs", 
             control=list(nstart=nstart, seed = seed, best=best, 
                          burnin = burnin, iter = iter, thin=thin))

perplexity(ldaOut, newdata = dtm)

Error in method(x, k, control, model, mycall, ...) : Need 1 seeds

1 answer:

Answer 0: (score: 1)

You also need to pass the argument "estimate_theta".

Use the following code:

perplexity(ldaOut, newdata = dtm, estimate_theta = FALSE)
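
For the original goal of comparing models across different numbers of topics, a minimal sketch is shown below. It assumes the topicmodels package, reuses the dtm and the control settings defined in the question, and uses an illustrative range of k values (not prescribed by the answer):

library(topicmodels)

#Candidate numbers of topics to compare (illustrative values)
k_values <- c(2, 5, 10, 15)

#Fit one Gibbs-sampled LDA per k and record its perplexity on dtm
perplexities <- sapply(k_values, function(k) {
  fit <- LDA(dtm, k, method = "Gibbs",
             control = list(nstart = nstart, seed = seed, best = best,
                            burnin = burnin, iter = iter, thin = thin))
  perplexity(fit, newdata = dtm, estimate_theta = FALSE)
})

#Lower perplexity suggests a better fit; plot to inspect the trend over k
plot(k_values, perplexities, type = "b",
     xlab = "Number of topics", ylab = "Perplexity")

Note that this evaluates perplexity on the same dtm used for fitting; holding out a subset of documents for evaluation would give a less optimistic comparison.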