Problems using a large custom stopword list with the tm package (R)

Date: 2016-11-17 10:22:27

Tags: r tm

I'm sure many people have seen this one before:

Warning message:
In mclapply(content(x), FUN, ...) :
  all scheduled cores encountered errors in user code

This time I get the error when I try to remove a custom stopword list from my corpus:

asdf <- tm_map(asdf, removeWords, mystops)

It works with small stopword lists (I tried one with around 100 words), but my current stopword list has about 42,000 words.

I tried this:

asdf <- tm_map(asdf, removeWords, mystops, lazy=T)

That does not throw an error, but every tm_map command after it gives me the error above, and when I try to compute a DTM from the corpus I get:

Error in UseMethod("meta", x) : 
  no applicable method for 'meta' applied to an object of class "try-error"
In addition: Warning message:
In mclapply(unname(content(x)), termFreq, control) :
  all scheduled cores encountered errors in user code

I am considering writing a function that loops the removeWords command over small chunks of my list, but I would also like to understand why the length of the list is a problem at all.

Here is my sessionInfo():

sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X El Capitan 10.11.6

locale:
[1] de_DE.UTF-8/de_DE.UTF-8/de_DE.UTF-8/C/de_DE.UTF-8/de_DE.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] SnowballC_0.5.1    wordcloud_2.5      RColorBrewer_1.1-2 RTextTools_1.4.2   SparseM_1.74       topicmodels_0.2-4  tm_0.6-2          
[8] NLP_0.1-9         

loaded via a namespace (and not attached):
 [1] Rcpp_0.12.7         splines_3.3.2       MASS_7.3-45         tau_0.0-18          prodlim_1.5.7       lattice_0.20-34     foreach_1.4.3      
 [8] tools_3.3.2         caTools_1.17.1      nnet_7.3-12         parallel_3.3.2      grid_3.3.2          ipred_0.9-5         glmnet_2.0-5       
[15] e1071_1.6-7         iterators_1.0.8     modeltools_0.2-21   class_7.3-14        survival_2.39-5     randomForest_4.6-12 Matrix_1.2-7.1     
[22] lava_1.4.5          bitops_1.0-6        codetools_0.2-15    maxent_1.3.3.1      rpart_4.1-10        slam_0.1-38         stats4_3.3.2       
[29] tree_1.0-37  

Edit:

20 newsgroups dataset

I am using 20news-bydate.tar.gz and only the train folder.

I won't share all of the preprocessing I do, because it includes a morphological analysis of the whole corpus (done outside of R).

Here is my R code:

library(tm)
library(topicmodels)
library(SnowballC)

asdf <- Corpus(DirSource("/path/to/20news-bydate/train",encoding="UTF-8"),readerControl=list(language="en"))
asdf <- tm_map(asdf, content_transformer(tolower))
asdf <- tm_map(asdf, removeWords, stopwords(kind="english"))
asdf <- tm_map(asdf, removePunctuation)
asdf <- tm_map(asdf, removeNumbers)
asdf <- tm_map(asdf, stripWhitespace)  
# until here: preprocessing


# building DocumentTermMatrix with term frequency
dtm <- DocumentTermMatrix(asdf, control=list(weighting=weightTf))


# building a matrix from the DTM and wordvector (all words as titles, 
# sorted by frequency in corpus) and wordlengths (length of actual 
# wordstrings in the wordvector)
m <- as.matrix(dtm)
wordvector <- sort(colSums(m),decreasing=T)
wordlengths <- nchar(names(wordvector))

mystops1 <- names(wordvector[wordlengths>22]) # all words longer than 22 characters
mystops2 <- names(wordvector)[wordvector<3]   # all words with occurrence < 3
mystops <- c(mystops1, mystops2)              # the stopword list

# going back to the corpus to remove the chosen words
asdf <- tm_map(asdf, removeWords, mystops)

This is where I get the error.

2 Answers:

Answer 0 (score: 2)

As I already suspected in the comment: removeWords from the tm package uses a Perl regular expression. All the words are joined together with the | ("or") pipe. In your case the resulting string contains too many characters:

Error in gsub(regex, "", txt, perl = TRUE) : invalid regular expression
  '(*UCP)\b(zxmkrstudservzdvunituebingende|zxmkrstudservzdvunituebingende|...|unwantingly|
In addition: Warning message:
In gsub(regex, "", txt, perl = TRUE) :
  PCRE pattern compilation error
        'regular expression is too large'
        at ''
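
For context, this is roughly what removeWords does with the word list internally — a paraphrased sketch of the tm 0.6-x behaviour (the function name remove_words_sketch is made up here), not the package source. The whole list ends up in a single alternation, so the pattern grows with every word you add:

# Paraphrased sketch of removeWords for character input (assumption: tm 0.6-x):
# all stopwords are joined into ONE (*UCP)\b(...)\b pattern and removed by a single gsub() call
remove_words_sketch <- function(txt, words) {
  regex <- sprintf("(*UCP)\\b(%s)\\b",
                   paste(sort(words, decreasing = TRUE), collapse = "|"))
  gsub(regex, "", txt, perl = TRUE)
}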

One solution: define your own removeWords function that splits an oversized regular expression at a character limit and then applies each partial regex separately, so the limit is never reached:

f <- content_transformer(function(txt, words, n = 30000L) {
  # cumulative length of the alternation "word1|word2|...": each word adds
  # its own characters plus one pipe separator (except the first word)
  l <- cumsum(nchar(words) + c(0, rep(1, length(words) - 1)))
  # split the words into groups of roughly n pattern characters each
  groups <- cut(l, breaks = seq(1, ceiling(tail(l, 1)/n)*n + 1, by = n))
  # build one (*UCP)\b(...)\b regex per group, just like removeWords does
  regexes <- sapply(split(words, groups), function(words)
    sprintf("(*UCP)\\b(%s)\\b", paste(sort(words, decreasing = TRUE), collapse = "|")))
  # apply the partial regexes one after the other
  for (regex in regexes) txt <- gsub(regex, "", txt, perl = TRUE)
  return(txt)
})
asdf <- tm_map(asdf, f, mystops)
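
Two details of this workaround are worth noting. Sorting each chunk with sort(words, decreasing = TRUE) mirrors removeWords itself and puts a longer word before any word that is a prefix of it, so the longer alternative is tried first. And n = 30000L is simply a cut-off chosen to keep each partial pattern well below the size that made PCRE refuse to compile the full 42,000-word pattern; it can be tuned.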

Answer 1 (score: 1)

Your custom stopword list is too large, so you have to break it into pieces:

group <- 100                                   # words per chunk
n <- length(mystops)
r <- rep(1:ceiling(n/group), each = group)[1:n]
d <- split(mystops, r)                         # list of chunks of at most 100 words

for (i in seq_along(d)) {
  asdf <- tm_map(asdf, removeWords, d[[i]])
}
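
Once the stopwords have been removed chunk by chunk (with either answer's approach), the DocumentTermMatrix step from the question should run again, e.g.:

dtm <- DocumentTermMatrix(asdf, control=list(weighting=weightTf))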