I am trying to use the RWeka NGramTokenizer function to extract 1-grams, 2-grams and 3-grams from my train corpus. Unfortunately, I only get 1-grams. Here is my code:
train_corpus
# clean-up
cleanset1<- tm_map(train_corpus, tolower)
cleanset2<- tm_map(cleanset1, removeNumbers)
cleanset3<- tm_map(cleanset2, removeWords, stopwords("english"))
cleanset4<- tm_map(cleanset3, removePunctuation)
cleanset5<- tm_map(cleanset4, stemDocument, language="english")
cleanset6<- tm_map(cleanset5, stripWhitespace)
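(As an aside, unrelated to the n-gram issue: with tm 0.6 and later, plain string functions such as tolower should be wrapped in content_transformer() so that tm_map keeps returning a proper corpus; removeNumbers, removeWords, stemDocument and the rest are already tm transformations and need no wrapper. A minimal sketch of the first clean-up step in that idiom, assuming the same train_corpus object:)
cleanset1 <- tm_map(train_corpus, content_transformer(tolower))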
# 1-gram
NgramTokenizer1 <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 1))
train_dtm_tf_1g <- DocumentTermMatrix(cleanset6, control=list(tokenize=NgramTokenizer1))
dim(train_dtm_tf_1g)
[1] 5905 15322
# 2-gram
NgramTokenizer2 <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))
train_dtm_tf_2g <- DocumentTermMatrix(cleanset6, control=list(tokenize=NgramTokenizer2))
dim(train_dtm_tf_2g)
[1] 5905 15322
# 3-gram
NgramTokenizer3 <- function(x) NGramTokenizer(x, Weka_control(min = 3, max = 3))
train_dtm_tf_3g <- DocumentTermMatrix(cleanset6, control=list(tokenize=NgramTokenizer3))
dim(train_dtm_tf_3g)
[1] 5905 15322
I get the same result every time, which is obviously wrong.
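To narrow this down, it helps to check whether the tokenizer itself works and what terms actually ended up in the matrix. A quick diagnostic sketch (the test sentence is made up):
# the tokenizer alone should return bigrams
NGramTokenizer("this is a simple test", Weka_control(min = 2, max = 2))
# inspect the first terms of the supposed 2-gram matrix
head(Terms(train_dtm_tf_2g))
If the first call returns bigrams but Terms() shows only single words, the tokenize option is being ignored by DocumentTermMatrix, which points at the corpus object rather than at RWeka.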
# combining together 1-gram, 2-gram and 3-gram from corpus
NgramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 3))
train_dtm_tf_ng <- DocumentTermMatrix(cleanset6, control=list(tokenize=NgramTokenizer))
dim(train_dtm_tf_ng)
[1] 5905 15322
# removeSparseTerms: the sparse argument is the maximal allowed sparsity, strictly between 0 and 1
train_rmspa_m_tf_ng_95 <- removeSparseTerms(train_dtm_tf_ng, 0.95)
dim(train_rmspa_m_tf_ng_95)
[1] 5905 172
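For reference, sparse = 0.95 keeps only the terms that are missing from at most 95% of the documents, i.e. terms occurring in roughly 5% of them or more. A toy illustration (the three example documents are made up):
library(tm)
toy <- VCorpus(VectorSource(c("oil price up", "oil price down", "annual report")))
toy_dtm <- DocumentTermMatrix(toy)
dim(toy_dtm)                           # all terms
dim(removeSparseTerms(toy_dtm, 0.4))   # only terms missing from at most 40% of docs survive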
# create bag of words (BOW) vector of these terms for use later
train_BOW_ng_95 <- findFreqTerms(train_rmspa_m_tf_ng_95)
# take a look at the terms that appear in at least 5% of the documents
train_BOW_ng_95
[1] "avg" "februari" "januari" "level" "nation" "per" "price"
[8] "rate" "report" "reserv" "reuter" "also" "board" "export"
[15] "march" "may" "month" "oil" "product" "total" "annual"
[22] "approv" "april" "capit" "common" "compani" "five" "inc"
[29] "increas" "meet" "mln" "record" "said" "share" "sharehold"
[36] "stock" "acquir" "addit" "buy" "chang" "complet" "continu"
...
Only 1-grams. I tried rewriting my command as:
NgramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 3))
but with no success. I also tried adding another line:
options(mc.cores=1)
before issuing the NgramTokenizer command, but nothing changed.
Any help?
Answer (score: 3)
I ran into the same problem today. For some reason, tm_map does not seem to be compatible with a SimpleCorpus.
I changed my code from
corpus = Corpus(VectorSource(pd_cmnt$QRating_Explaination))
to
corpus = VCorpus(VectorSource(pd_cmnt$QRating_Explaination))
and now it works and correctly returns 2-grams.
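Applied to the question's pipeline, the only change would be the corpus constructor. A condensed sketch (object names reuse the question's; train_docs is a hypothetical name for the raw character vector):
library(tm)
library(RWeka)
train_corpus <- VCorpus(VectorSource(train_docs))  # VCorpus instead of Corpus/SimpleCorpus
# ... same clean-up steps as in the question ...
NgramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 3))
train_dtm_tf_ng <- DocumentTermMatrix(cleanset6, control = list(tokenize = NgramTokenizer))
With a VCorpus, the tokenize option is honoured and the matrix should contain 1-, 2- and 3-grams.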