Using bigrams

Posted: 2015-06-11 06:24:35

Tags: r text-mining tm tf-idf lda

I have a CSV in which each row is a document, and I need to run LDA on it. I have the following code:

library(tm)
library(SnowballC)
library(topicmodels)
library(RWeka)

X = read.csv('doc.csv',sep=",",quote="\"",stringsAsFactors=FALSE)

corpus <- Corpus(VectorSource(X))
corpus <- tm_map(tm_map(tm_map(corpus, stripWhitespace), tolower), stemDocument)
corpus <- tm_map(corpus, PlainTextDocument)
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))
dtm <- DocumentTermMatrix(corpus, control = list(tokenize=BigramTokenizer,weighting=weightTfIdf))

At this point, inspecting the dtm object gives:

<<DocumentTermMatrix (documents: 52, terms: 477)>>
Non-/sparse entries: 492/24312
Sparsity           : 98%
Maximal term length: 20
Weighting          : term frequency - inverse document frequency (normalized) (tf-idf)

Now I move on to the LDA step:

rowTotals <- apply(dtm , 1, sum) 
dtm.new   <- dtm[rowTotals> 0, ]
g = LDA(dtm.new,10,method = 'VEM',control=NULL,model=NULL)

and I get the following error:

Error in LDA(dtm.new, 10, method = "VEM", control = NULL, model = NULL) : 
  The DocumentTermMatrix needs to have a term frequency weighting

The document-term matrix clearly has a weighting applied. What am I doing wrong?

Please help.

1 Answer:

Answer 0 (score: 1)

The DocumentTermMatrix needs to have a term-frequency weighting. LDA is a model of raw term counts, so it rejects a tf-idf matrix (whose entries are non-integer weights). Build the matrix with `weightTf` (the default) instead of `weightTfIdf`:

DocumentTermMatrix(corpus,
                   control = list(tokenize = BigramTokenizer,
                                  weighting = weightTf))
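For reference, the whole pipeline with that fix might look like the sketch below. The column name `text` in `doc.csv` is an assumption (the question does not show the CSV's structure), and `VCorpus` is used because recent versions of tm only apply custom tokenizers to a `VCorpus`:

```r
library(tm)
library(SnowballC)
library(RWeka)
library(topicmodels)

X <- read.csv("doc.csv", stringsAsFactors = FALSE)

# Assumed column name: one document per row in X$text
corpus <- VCorpus(VectorSource(X$text))
corpus <- tm_map(corpus, stripWhitespace)
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, stemDocument)

BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))

# weightTf keeps raw integer counts, which is what LDA expects
dtm <- DocumentTermMatrix(corpus,
                          control = list(tokenize = BigramTokenizer,
                                         weighting = weightTf))

# Drop empty documents before fitting, as in the question
rowTotals <- slam::row_sums(dtm)
dtm.new   <- dtm[rowTotals > 0, ]
g <- LDA(dtm.new, k = 10, method = "VEM")
```

If you specifically want tf-idf for some other analysis, compute it on a separate copy of the matrix; the one passed to `LDA` must stay count-weighted.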