Removing stop phrases from a DocumentTermMatrix

Date: 2018-07-13 19:19:58

Tags: r n-gram topic-modeling corpus stop-words

Below, I do basic topic modeling on the "crude" data. I know I can remove stopwords with tm_map, but I don't know how to do so after bigram tokenization has taken place.

library(topicmodels)
library(tm)
library(RWeka)
library(ggplot2)
library(dplyr)
library(tidytext)

data("crude")
words <- tm_map(crude, content_transformer(tolower))
words <- tm_map(words, removePunctuation)
words <- tm_map(words, stripWhitespace)

BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 2))

#unigram and bigram tokenization
dtm <- DocumentTermMatrix(words, control = list(tokenize = BigramTokenizer))
ui <- unique(dtm$i)
dtm <- dtm[ui, ]  #remove "empty" documents

lda <- LDA(dtm, k = 2,control = list(seed = 7272))

topics <- tidy(lda, matrix = "beta")

##Graphs
top_terms <- topics %>%
  group_by(topic) %>%
  top_n(10, beta) %>%
  ungroup() %>%
  arrange(topic, -beta)

top_terms %>%
  mutate(term = reorder(term, beta)) %>%
  ggplot(aes(term, beta, fill = factor(topic))) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~ topic, scales = "free") +
  coord_flip()

#single
stopwords1<- stopwords("english") ##I actually use a custom list: read.csv("stopwords.txt", header = FALSE)
adnlstopwords1<-c("ny","new","york","yorks","state","nyc","nys")

#doubles
stopwords2<-levels(interaction(stopwords1,stopwords1,sep=' '))
adnlstopwords2<-c(stopwords2,c("new york", "york state", "in ny", "in new",
                  "new yorks"))

stopwords <- c(stopwords1, adnlstopwords1, stopwords2, adnlstopwords2)
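As a side note on the interaction() trick above: it enumerates every ordered two-word combination of the stopword list. A tiny illustration with a hypothetical two-word list:

```r
# interaction() crosses its factor arguments; with sep = ' ' its levels
# are every ordered pair of the inputs, which is how the two-word
# stop phrases above are generated.
small <- c("the", "of")
pairs <- levels(interaction(small, small, sep = ' '))
length(pairs)  # 4: every ordered combination of "the" and "of"
```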

My question is how to remove these bigrams from the dtm without using tm_map, or a possible workaround. Note that the "new york" bigrams may not appear in the crude data, but they are important for my other data.

1 answer:

Answer 0 (score: 0)

I found the solution in the "gofastr" package in R:

dtm2 <- remove_stopwords(dtm, stopwords = stopwords)

However, I still saw stop phrases in my results. After looking at the documentation, remove_stopwords assumes a sorted list; you can prepare the stopwords/phrases with the prep_stopwords() function from the same package:

stopwords<-prep_stopwords(stopwords)
dtm2 <- remove_stopwords(dtm, stopwords = stopwords)
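If you'd rather not pull in another package, a workaround with the same effect (a sketch, assuming the dtm and combined stopwords vector built in the question) is to drop the matching columns from the DocumentTermMatrix directly, since its terms include the bigrams:

```r
# Terms() of a DocumentTermMatrix holds the tokenized terms, including
# bigrams, so columns whose term exactly matches a stop word/phrase
# can simply be dropped.
keep <- !(Terms(dtm) %in% stopwords)
dtm2 <- dtm[, keep]
```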

To handle stemming as well, we can stem in the tm_map portion of the code and then remove the stemmed stopwords like this:

stopwords<-prep_stopwords(stemDocument(stopwords))
dtm2 <- remove_stopwords(dtm, stopwords = stopwords)

since this stems the stopwords, which will then match the already-stemmed terms in the dtm.
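The stemming "in the tm_map portion of the code" that the answer refers to is not shown; a minimal sketch of that step (assuming the words corpus and BigramTokenizer from the question, applied before building the DocumentTermMatrix):

```r
# Stem each document so the dtm's terms are stemmed, matching the
# stemmed stop phrases prepared with prep_stopwords(stemDocument(...)).
words <- tm_map(words, stemDocument)
dtm <- DocumentTermMatrix(words, control = list(tokenize = BigramTokenizer))
```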