How to remove unnecessary information in topic modeling (LDA)

Posted: 2017-09-14 09:21:38

Tags: r data-mining text-mining lda topic-modeling


Hello, I want to build a topic model. My data has this structure:

1. Doesn't taste good to me.
2. Most delicious ramen I have ever had. Spicy and tasty. Great price too.
3. I have this on my subscription, my family loves this version. The taste is great by itself or when we add the vegetables and.or meats.
4. The noodle is ok, but I had better ones.
5. some day's this is lunch and or dinner  on second case
6. Really good ramen!

I cleaned the reviews and moved on to topic modeling. But as you can see, junk tokens such as `")."`, `"26.6564810276031"`, and `"character(0)"` show up in the results:

[,1]             [,2]                [,3]            [,4]                 
 [1,] "cabbag"  ")."                "="             "side"                        
 [2,] "gonna"   "26.6564810276031," ""              "day,"              
 [3,] "broth"   "figur"             "character(0)," "ok."

At first, when I only looked at word frequencies, these tokens did not appear, but they show up once I run the topic model.

What did I do wrong, and how can I fix it?

library(tm)
library(XML)
library(SnowballC)

crudeCorp<-VCorpus(VectorSource(readLines(file.choose())))
crudeCorp <- tm_map(crudeCorp, stripWhitespace)
crudeCorp<-tm_map(crudeCorp, content_transformer(tolower))



# remove stopwords from corpus
crudeCorp<-tm_map(crudeCorp, removeWords, stopwords("english"))
myStopwords <- c(stopwords("english"),
                 "noth", "two", "first", "lot", "because", "can", "will",
                 "go", "also", "get", "since", "way", "even", "just", "now",
                 "give", "gave", "got", "one", "make", "much", "come", "take",
                 "without", "goes", "along", "alot", "alone")
myStopwords <- setdiff(myStopwords, c("will","can"))

crudeCorp <- tm_map(crudeCorp, removeWords, myStopwords)
crudeCorp<-tm_map(crudeCorp,removeNumbers)

crudeCorp <- tm_map(crudeCorp, content_transformer(function(x) 
  gsub(x, pattern = "bought", replacement = "buy")))
crudeCorp <- tm_map(crudeCorp, content_transformer(function(x) 
  gsub(x, pattern = "broke", replacement = "break")))
crudeCorp <- tm_map(crudeCorp, content_transformer(function(x) 
  gsub(x, pattern = "products", replacement = "product")))
crudeCorp <- tm_map(crudeCorp, content_transformer(function(x) 
  gsub(x, pattern = "made", replacement = "make")))


crudeCorp <- tm_map(crudeCorp, stemDocument)



library(reshape)
library(ScottKnott)
library(lda)





### Faster Way of doing LDA 
corpusLDA <- lexicalize(crudeCorp)

## K: number of topics; vocab = corpusLDA$vocab (the word list)

ldaModel <- lda.collapsed.gibbs.sampler(corpusLDA$documents, K = 7,
    vocab = corpusLDA$vocab, burnin = 9999, num.iterations = 1000,
    alpha = 1, eta = 0.1)

top.words <- top.topic.words(ldaModel$topics, 10, by.score=TRUE)
print(top.words) 

2 Answers:

Answer 0 (score: 0)

There are two possible approaches. You can either add the unwanted words to your stopword set, or use a regular expression during preprocessing to drop non-alphanumeric tokens.
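A minimal sketch of the second approach, using the `tm` functions already loaded in the question's script (the toy `docs` vector below is illustrative, reproducing the kind of junk tokens shown above):

```r
library(tm)

# Toy documents containing the kinds of junk the question shows:
# a stray decimal number and punctuation-only "words"
docs <- c("Most delicious ramen 26.6564810276031 , ).",
          "character(0), = Really good ramen!")
crudeCorp <- VCorpus(VectorSource(docs))

# Strip punctuation and digits BEFORE stemming/tokenizing,
# so tokens like ")." or "=" never reach the LDA vocabulary
crudeCorp <- tm_map(crudeCorp, removePunctuation)
crudeCorp <- tm_map(crudeCorp, removeNumbers)

# Or, equivalently, a single regex that keeps only letters and spaces
crudeCorp <- tm_map(crudeCorp, content_transformer(
  function(x) gsub("[^a-zA-Z ]", " ", x)))
crudeCorp <- tm_map(crudeCorp, stripWhitespace)

as.character(crudeCorp[[1]])
```

Running these transformations before `stemDocument` and `lexicalize` should keep the number and punctuation tokens out of the model's vocabulary.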

Answer 1 (score: 0)

You can apply the simple three-step process described below.

  1. Extract words from the text by splitting on separators such as spaces, '.', and ','.
  2. Clean the text: correct misspelled words, remove stopwords, stem, etc.
  3. Apply a weighting measure to estimate each word's importance, such as term frequency (TF), inverse document frequency (IDF), TF-IDF, etc.
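The three steps above can be sketched in base R without any packages (the `docs`, `stops`, and variable names below are illustrative, and the TF-IDF formula used is the common `tf * log(N / df)` variant):

```r
docs <- c("Doesn't taste good to me.",
          "Most delicious ramen I have ever had. Spicy and tasty.",
          "Really good ramen!")

# Step 1: split each document on separators (spaces, '.', ',', '!', '?', "'")
tokens <- lapply(docs, function(d)
  tolower(unlist(strsplit(d, "[[:space:].,!?']+"))))

# Step 2: clean: drop empty strings and a small illustrative stopword list
stops <- c("to", "me", "i", "have", "and", "ever", "had", "t")
tokens <- lapply(tokens, function(t) setdiff(t[nzchar(t)], stops))

# Step 3: weigh each term with TF-IDF: tf(t, d) * log(N / df(t))
vocab <- unique(unlist(tokens))
N <- length(tokens)
df <- sapply(vocab, function(w)
  sum(sapply(tokens, function(t) w %in% t)))
tfidf <- t(sapply(tokens, function(t) {
  tf <- sapply(vocab, function(w) sum(t == w)) / max(1, length(t))
  tf * log(N / df)
}))
colnames(tfidf) <- vocab
round(tfidf, 3)
```

Terms that occur in every document get a weight of zero (log(N/N) = 0), which is exactly why TF-IDF pushes down uninformative words; in practice the same weighting is available in `tm` via `DocumentTermMatrix(corpus, control = list(weighting = weightTfIdf))`.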