Error: "The DocumentTermMatrix needs to have a term frequency weighting"

Asked: 2015-11-18 01:32:40

Tags: r lda topicmodels

I am trying to run LDA() from the topicmodels package on a fairly large dataset. After working through fixes for the earlier errors "NAs produced by integer overflow in nr * nc" and "Each row of the input matrix needs to contain at least one non-zero entry", I ended up with this one:

ask <- read.csv('askreddit201508.csv', stringsAsFactors = FALSE)
myDtm <- create_matrix(as.vector(ask$title), language = "english",
                       removeNumbers = TRUE, stemWords = TRUE, weighting = weightTf)
myDtm2 <- removeSparseTerms(myDtm, 0.99999)
myDtm2 <- rollup(myDtm2, 2, na.rm = TRUE, FUN = sum)
rowTotals <- apply(myDtm2, 1, sum)
myDtm2 <- myDtm2[rowTotals > 0, ]
LDA2 <- LDA(myDtm2, 100)

Error in LDA(myDtm2, 100) : 
  The DocumentTermMatrix needs to have a term frequency weighting

2 Answers:

Answer 0 (score: 4):

Part of the problem is that you are weighting the document-term matrix by tf-idf, but LDA() requires raw term counts. In addition, this method of removing sparse terms appears to have created some documents from which all terms have been removed.
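A minimal sketch of how to check for both problems before calling LDA() (assumes the tm and slam packages are installed, and that `myDtm2` is the matrix built above; note that slam's rollup() can return an object that has lost the DTM weighting attribute LDA() inspects):

```r
library(tm)    # DocumentTermMatrix and its "weighting" attribute
library(slam)  # row_sums for sparse simple_triplet_matrix objects

# 1. LDA() expects a term-frequency weighting; a tf-idf-weighted DTM
#    (or one whose weighting attribute was dropped) triggers the error.
attr(myDtm2, "weighting")   # expected: c("term frequency", "tf")

# 2. Find and drop documents left empty by removeSparseTerms();
#    row_sums avoids the integer-overflow problem of apply(..., 1, sum).
rowTotals <- row_sums(myDtm2)
sum(rowTotals == 0)         # number of now-empty documents
myDtm2 <- myDtm2[rowTotals > 0, ]
```

If the weighting attribute is wrong or missing, rebuilding the matrix with `DocumentTermMatrix(corpus, control = list(weighting = weightTf))` is the usual fix, as the second answer below shows.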

It is easier to go from texts to a topic model using the quanteda package. Here's how:

require(quanteda)
myCorpus <- corpus(textfile("http://homepage.stat.uiowa.edu/~thanhtran/askreddit201508.csv",
                            textField = "title"))
myDfm <- dfm(myCorpus, stem = TRUE)
## Creating a dfm from a corpus ...
##    ... lowercasing
##    ... tokenizing
##    ... indexing documents: 160,707 documents
##    ... indexing features: 39,505 feature types
##    ... stemming features (English), trimmed 12563 feature variants
##    ... created a 160707 x 26942 sparse dfm
##    ... complete. 

# remove infrequent terms: see http://stats.stackexchange.com/questions/160539/is-this-interpretation-of-sparsity-accurate/160599#160599
sparsityThreshold <- round(ndoc(myDfm) * (1 - 0.99999))
myDfm2 <- trim(myDfm, minDoc = sparsityThreshold)
## Features occurring in fewer than 1.60707 documents: 12579
nfeature(myDfm2)
## [1] 14363

# fit the LDA model
require(topicmodels)
LDA2 <- LDA(quantedaformat2dtm(myDfm2), 100)

Answer 1 (score: 1):

all.dtm <- DocumentTermMatrix(corpus, control = list(weighting = weightTf))
inspect(all.dtm)

tpc.mdl.LDA <- LDA(all.dtm, k = number_of_topics)  # number_of_topics: desired topic count