I am trying to find code that actually works to find the most frequently used two- and three-word phrases in the R text mining package (maybe there is another package for it that I do not know about). I have been trying to use the tokenizer, but seem to have no luck.
If you have worked on a similar situation in the past, could you post code that is tested and actually works? Thank you so much!
Answer 0 (score: 11)
You can pass a custom tokenizing function to tm's DocumentTermMatrix function, so if you have the tau package installed it's fairly simple.
library(tm); library(tau)

# Count all n-grams up to length n with tau::textcnt and return them as a character vector
tokenize_ngrams <- function(x, n = 3) return(rownames(as.data.frame(unclass(textcnt(x, method = "string", n = n)))))

texts <- c("This is the first document.", "This is the second file.", "This is the third text.")
corpus <- Corpus(VectorSource(texts))
matrix <- DocumentTermMatrix(corpus, control = list(tokenize = tokenize_ngrams))
Here n in the tokenize_ngrams function is the number of words per phrase. This feature is also implemented in the RTextTools package, which simplifies things even further:
library(RTextTools)
texts <- c("This is the first document.", "This is the second file.", "This is the third text.")
matrix <- create_matrix(texts, ngramLength = 3)

This returns a DocumentTermMatrix class for use with the tm package.
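Since the question asks for the most frequent phrases, a natural follow-up is to rank the n-grams by total count. A minimal sketch, assuming the matrix object built above (any DocumentTermMatrix works the same way):

# Sum each n-gram's count across documents and show the ten most frequent
phrase.freq <- sort(colSums(as.matrix(matrix)), decreasing = TRUE)
head(phrase.freq, 10)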
Answer 1 (score: 8)
From the tm FAQ:

5. Can I use bigrams instead of single tokens in a term-document matrix?

Yes. RWeka provides a tokenizer for arbitrary n-grams which can be passed directly to the term-document matrix constructor. E.g.:
library("RWeka")
library("tm")
data("crude")
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))
tdm <- TermDocumentMatrix(crude, control = list(tokenize = BigramTokenizer))
inspect(tdm[340:345,1:10])
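To get from the matrix to the most frequent bigrams, tm's findFreqTerms works directly on the result; row sums work too, since terms are the rows of a TermDocumentMatrix. A small sketch building on the tdm above (the threshold of 10 is arbitrary):

# List bigrams occurring at least 10 times in the corpus
findFreqTerms(tdm, lowfreq = 10)

# Or rank all bigrams by total frequency
head(sort(rowSums(as.matrix(tdm)), decreasing = TRUE))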
Answer 2 (score: 3)
Here's something of my own creation that I made for a different purpose, but I think it may fit your needs too:
# User-defined functions

# Strip leading and trailing whitespace
Trim <- function(x) gsub("^\\s+|\\s+$", "", x)

# Split on whitespace and before sentence punctuation
breaker <- function(x) unlist(strsplit(x, "[[:space:]]|(?=[.!?*-])", perl = TRUE))

# Lowercase and remove punctuation, optionally also digits and apostrophes
strip <- function(x, digit.remove = TRUE, apostrophe.remove = FALSE){
    strp <- function(x, digit.remove, apostrophe.remove){
        x2 <- Trim(tolower(gsub(".*?($|'|[^[:punct:]]).*?", "\\1", as.character(x))))
        x2 <- if(apostrophe.remove) gsub("'", "", x2) else x2
        ifelse(digit.remove==TRUE, gsub("[[:digit:]]", "", x2), x2)
    }
    unlist(lapply(x, function(x) Trim(strp(x = x, digit.remove = digit.remove,
        apostrophe.remove = apostrophe.remove))))
}

# Drop empty strings
unblanker <- function(x) subset(x, nchar(x) > 0)
# Fake text data
x <- "I like green eggs and ham. They are delicious. They taste so yummy. I'm talking about ham and eggs of course"

# The code using base R to do what you want
breaker(x)
strip(x)
words <- unblanker(breaker(strip(x)))
textDF <- as.data.frame(table(words))
textDF$characters <- sapply(as.character(textDF$words), nchar)
textDF2 <- textDF[order(-textDF$characters, textDF$Freq), ]
rownames(textDF2) <- 1:nrow(textDF2)
textDF2
subset(textDF2, characters%in%2:3)
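Note that as written this counts single words (the final subset keeps words of 2-3 characters, not 2-3 word phrases). A minimal base-R sketch to extend it to two-word phrases, assuming the words vector built above:

# Pair each word with its successor to form bigrams, then tabulate the counts
bigrams <- paste(head(words, -1), tail(words, -1))
head(sort(table(bigrams), decreasing = TRUE), 5)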
Answer 3 (score: 2)
The corpus library has a function called term_stats that does what you want:
library(corpus)
corpus <- gutenberg_corpus(55) # Project Gutenberg #55, _The Wizard of Oz_
text_filter(corpus)$drop_punct <- TRUE # ignore punctuation
term_stats(corpus, ngrams = 2:3)
##    term             count support
## 1  of the             336       1
## 2  the scarecrow      208       1
## 3  to the             185       1
## 4  and the            166       1
## 5  said the           152       1
## 6  in the             147       1
## 7  the lion           141       1
## 8  the tin            123       1
## 9  the tin woodman    114       1
## 10 tin woodman        114       1
## 11 i am                84       1
## 12 it was              69       1
## 13 in a                64       1
## 14 the great           63       1
## 15 the wicked          61       1
## 16 wicked witch        60       1
## 17 at the              59       1
## 18 the little          59       1
## 19 the wicked witch    58       1
## 20 back to             57       1
## ⋮  (52511 rows total)
Here, count is the number of appearances, and support is the number of documents containing the term.
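term_stats is not tied to Project Gutenberg texts; assuming corpus's usual coercion of plain character vectors (an assumption worth checking against your version), you can point it at your own documents:

# A sketch on ad-hoc text instead of a downloaded book
term_stats(c("This is the first document.", "This is the second file."), ngrams = 2:3)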
Answer 4 (score: 1)
I hit a similar problem using the tm and ngram packages. After debugging mclapply, I saw there were problems on documents with fewer than two words, which failed with the following error:
input 'x' has nwords=1 and n=2; must have nwords >= n
So I added a filter to remove documents with a low word count:
# Keep only documents containing at least two whitespace-separated tokens
myCorpus.3 <- tm_filter(myCorpus.2, function (x) {
  length(unlist(strsplit(stringr::str_trim(x$content), '[[:blank:]]+'))) > 1
})
Then my tokenize function looks like:
bigramTokenizer <- function(x) {
  x <- as.character(x)

  # Find words
  one.list <- c()
  tryCatch({
    one.gram <- ngram::ngram(x, n = 1)
    one.list <- ngram::get.ngrams(one.gram)
  },
  error = function(cond) { warning(cond) })

  # Find 2-grams
  two.list <- c()
  tryCatch({
    two.gram <- ngram::ngram(x, n = 2)
    two.list <- ngram::get.ngrams(two.gram)
  },
  error = function(cond) { warning(cond) })

  res <- unlist(c(one.list, two.list))
  res[res != '']
}
Then you can test the function with:
dtmTest <- lapply(myCorpus.3, bigramTokenizer)
And finally:
dtm <- DocumentTermMatrix(myCorpus.3, control = list(tokenize = bigramTokenizer))
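From there the most frequent terms can be read off the matrix; a minimal sketch using tm's findFreqTerms on the dtm built above (threshold chosen arbitrarily):

# Terms (unigrams and bigrams, since the tokenizer emits both) occurring at least 5 times
findFreqTerms(dtm, lowfreq = 5)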
Answer 5 (score: 1)
Try the tidytext package.
library(dplyr)
library(tidytext)
library(janeaustenr)
library(tidyr)
Suppose I have a dataframe CommentData that contains a Comment column, and I want to find occurrences of two words together. Then try:
bigram_filtered <- CommentData %>%
  unnest_tokens(bigram, Comment, token = "ngrams", n = 2) %>%
  separate(bigram, c("word1", "word2"), sep = " ") %>%
  filter(!word1 %in% stop_words$word,
         !word2 %in% stop_words$word) %>%
  count(word1, word2, sort = TRUE)
The code above creates the tokens, removes stop words that don't help the analysis (e.g. a, an, to, etc.), and then counts the occurrences of these word pairs. You then use the unite function to combine the individual words again and record their occurrence:
bigrams_united <- bigram_filtered %>%
  unite(bigram, word1, word2, sep = " ")

bigrams_united
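The question also asks for three-word phrases; under the same assumptions (the hypothetical CommentData frame with its Comment column), the trigram version is a one-argument change:

# Count raw trigrams, keeping stop words this time for simplicity
trigram_counts <- CommentData %>%
  unnest_tokens(trigram, Comment, token = "ngrams", n = 3) %>%
  count(trigram, sort = TRUE)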
Answer 6 (score: 0)
Try this code.
library(tm)
library(SnowballC)
library(class)
library(wordcloud)

# Read the data and build a corpus from the column of interest
keywords <- read.csv(file.choose(), header = TRUE, na.strings=c("NA","-","?"))
keywords_doc <- Corpus(VectorSource(keywords$"use your column that you need"))

# Clean the corpus: drop numbers, lowercase, collapse whitespace, drop punctuation, stem
keywords_doc <- tm_map(keywords_doc, removeNumbers)
keywords_doc <- tm_map(keywords_doc, tolower)
keywords_doc <- tm_map(keywords_doc, stripWhitespace)
keywords_doc <- tm_map(keywords_doc, removePunctuation)
keywords_doc <- tm_map(keywords_doc, PlainTextDocument)
keywords_doc <- tm_map(keywords_doc, stemDocument)
This is the part where you can use either bigrams or trigrams:
# Build bigrams with the words() and ngrams() helpers from the NLP package (attached with tm)
BigramTokenizer <- function(x)
  unlist(lapply(ngrams(words(x), 2), paste, collapse = " "), use.names = FALSE)
# Create the term-document matrix with the bigram tokenizer
keywords_matrix <- TermDocumentMatrix(keywords_doc, control = list(tokenize = BigramTokenizer))

# Remove sparse terms
keywords_naremoval <- removeSparseTerms(keywords_matrix, 0.95)

# Frequency of each term across the corpus
keyword.freq <- rowSums(as.matrix(keywords_naremoval))
subsetkeyword.freq <- subset(keyword.freq, keyword.freq >= 20)
frequentKeywordSubsetDF <- data.frame(term = names(subsetkeyword.freq), freq = subsetkeyword.freq)

# Sort the terms by descending frequency
frequentKeywordDF <- data.frame(term = names(keyword.freq), freq = keyword.freq)
frequentKeywordSubsetDF <- frequentKeywordSubsetDF[with(frequentKeywordSubsetDF, order(-frequentKeywordSubsetDF$freq)), ]
frequentKeywordDF <- frequentKeywordDF[with(frequentKeywordDF, order(-frequentKeywordDF$freq)), ]

# Plot the terms as a word cloud
wordcloud(frequentKeywordDF$term, freq=frequentKeywordDF$freq, random.order = FALSE, rot.per=0.35, scale=c(5,0.5), min.freq = 30, colors = brewer.pal(8,"Dark2"))
Hope this helps. This is a complete piece of code that you can use.