Wordcloud + Corpus error in R

Date: 2014-11-25 15:27:14

Tags: r twitter corpus

I want to use the wordcloud function to build a word cloud from Twitter data. I have installed the twitteR package and set up API access. After that I do the following:

library(twitteR)    # Twitter API access is assumed to be set up already
library(tm)
library(wordcloud)

bigdata <- searchTwitter("#bigdata", n=20)

bigdata_list <- sapply(bigdata, function(x) x$getText())
bigdata_corpus <- Corpus(VectorSource(bigdata_list))
bigdata_corpus <- tm_map(bigdata_corpus, content_transformer(tolower), lazy=TRUE)
bigdata_corpus <- tm_map(bigdata_corpus, removePunctuation, lazy=TRUE)
bigdata_corpus <- tm_map(bigdata_corpus, 
                           function(x)removeWords(x,stopwords()), lazy=TRUE)
wordcloud(bigdata_corpus)

This produces the following error message from the wordcloud command:

Error in UseMethod("meta", x) : 
  no applicable method for 'meta' applied to an object of class "try-error"
In addition: Warning messages:
1: In mclapply(x$content[i], function(d) tm_reduce(d, x$lazy$maps)) :
  all scheduled cores encountered errors in user code
2: In mclapply(unname(content(x)), termFreq, control) :
  all scheduled cores encountered errors in user code

I have tried different corpus commands, but I can't seem to get it to work. Any ideas?

1 Answer:

Answer 0 (score: 1)

You can try this:

library("tm")
# Transform your corpus into a term-document matrix
bigdata_tdm <- as.matrix(TermDocumentMatrix(bigdata_corpus))
# Get the frequency of each word
bigdata_freq <- data.frame(Words = rownames(bigdata_tdm), Freq = rowSums(bigdata_tdm), stringsAsFactors = FALSE)
# Sort by decreasing frequency
bigdata_freq <- bigdata_freq[order(bigdata_freq$Freq, decreasing = TRUE), ]
# Keep the 50 most frequent words
bigdata_freq <- bigdata_freq[1:50, ]

# Draw the wordcloud
library("wordcloud")
wordcloud(words = bigdata_freq$Words, freq = bigdata_freq$Freq)
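
If you want more control over the rendering, wordcloud also accepts optional arguments such as scale, min.freq, random.order and colors. The values below are only illustrative, and RColorBrewer is an extra package used just for the palette:

library("RColorBrewer")
# Illustrative settings, adjust to taste
wordcloud(words = bigdata_freq$Words, freq = bigdata_freq$Freq,
          scale = c(4, 0.5),               # size range between the largest and smallest words
          min.freq = 2,                    # drop words that appear only once
          random.order = FALSE,            # plot the most frequent words in the centre
          colors = brewer.pal(8, "Dark2")) # palette from RColorBrewer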

This works with tm_0.6 and wordcloud_2.5.
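
As for why the original call fails: the mclapply warnings suggest the lazy=TRUE mappings are the problem. In tm 0.6 lazy maps are evaluated later in parallel, and a failure there leaves try-error objects in the corpus, which is what the meta error then trips over. This is a guess rather than a confirmed diagnosis, but a minimal sketch of the same preprocessing without lazy evaluation would be:

library("tm")
library("wordcloud")

# Same cleaning steps as in the question, without lazy = TRUE
bigdata_corpus <- Corpus(VectorSource(bigdata_list))
bigdata_corpus <- tm_map(bigdata_corpus, content_transformer(tolower))
bigdata_corpus <- tm_map(bigdata_corpus, removePunctuation)
bigdata_corpus <- tm_map(bigdata_corpus, removeWords, stopwords())
wordcloud(bigdata_corpus)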