R-Project: no applicable method for 'meta' applied to an object of class "character"

Date: 2014-07-16 02:15:48

Tags: r text-mining tm

I am trying to run this code (Ubuntu 12.04, R 3.1.1):

# Load requisite packages
library(tm)
library(ggplot2)
library(lsa)

# Place Enron email snippets into a single vector.
text <- c(
  "To Mr. Ken Lay, I’m writing to urge you to donate the millions of dollars you made from selling Enron stock before the company declared bankruptcy.",
  "while you netted well over a $100 million, many of Enron's employees were financially devastated when the company declared bankruptcy and their retirement plans were wiped out",
  "you sold $101 million worth of Enron stock while aggressively urging the company’s employees to keep buying it",
  "This is a reminder of Enron’s Email retention policy. The Email retention policy provides as follows . . .",
  "Furthermore, it is against policy to store Email outside of your Outlook Mailbox and/or your Public Folders. Please do not copy Email onto floppy disks, zip disks, CDs or the network.",
  "Based on our receipt of various subpoenas, we will be preserving your past and future email. Please be prudent in the circulation of email relating to your work and activities.",
  "We have recognized over $550 million of fair value gains on stocks via our swaps with Raptor.",
  "The Raptor accounting treatment looks questionable. a. Enron booked a $500 million gain from equity derivatives from a related party.",
  "In the third quarter we have a $250 million problem with Raptor 3 if we don’t “enhance” the capital structure of Raptor 3 to commit more ENE shares.")
view <- factor(rep(c("view 1", "view 2", "view 3"), each = 3))
df <- data.frame(text, view, stringsAsFactors = FALSE)

# Prepare mini-Enron corpus
corpus <- Corpus(VectorSource(df$text))
corpus <- tm_map(corpus, tolower)
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, function(x) removeWords(x, stopwords("english")))
corpus <- tm_map(corpus, stemDocument, language = "english")
corpus # check corpus

# Mini-Enron corpus with 9 text documents

# Compute a term-document matrix that contains occurrence of terms in each email
# Compute distance between pairs of documents and scale the multidimensional semantic space (MDS) onto two dimensions
td.mat <- as.matrix(TermDocumentMatrix(corpus))
dist.mat <- dist(t(as.matrix(td.mat)))
dist.mat  # check distance matrix

# Compute distance between pairs of documents and scale the multidimensional semantic space onto two dimensions
fit <- cmdscale(dist.mat, eig = TRUE, k = 2)
points <- data.frame(x = fit$points[, 1], y = fit$points[, 2])
ggplot(points, aes(x = x, y = y)) +
  geom_point(data = points, aes(x = x, y = y, color = df$view)) +
  geom_text(data = points, aes(x = x, y = y - 0.2, label = row.names(df)))

However, when I run it, I get this error (at the line td.mat <- as.matrix(TermDocumentMatrix(corpus))):

Error in UseMethod("meta", x) : 
  no applicable method for 'meta' applied to an object of class "character"
In addition: Warning message:
In mclapply(unname(content(x)), termFreq, control) :
  all scheduled cores encountered errors in user code

I am not sure what to look at here; all the packages are loaded.

4 answers:

Answer 0 (score: 88)

The most recent version of tm (0.60) made it so you can no longer use functions with tm_map that operate on simple character values. So the problem is your tolower step, since that is not a "canonical" transformation (see getTransformations()). Just replace it with:
corpus <- tm_map(corpus, content_transformer(tolower))

The content_transformer function wrapper converts everything to the correct data type within the corpus. You can use content_transformer with any function that is intended to manipulate character vectors, so that it will work in a tm_map pipeline.
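
For instance, here is a minimal sketch of wrapping an arbitrary character-vector function; collapseWhitespace is a made-up helper for illustration, not part of tm:

# Hypothetical helper that operates on a plain character vector
collapseWhitespace <- function(x) gsub("\\s+", " ", x)

# content_transformer() wraps it so tm_map keeps the corpus structure intact
corpus <- tm_map(corpus, content_transformer(collapseWhitespace))
corpus <- tm_map(corpus, content_transformer(tolower))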

Answer 1 (score: 29)

This is a bit old, but just for the sake of later Google searches: there is an alternative solution. After corpus <- tm_map(corpus, tolower), you can use corpus <- tm_map(corpus, PlainTextDocument), which turns it back into the correct data type.
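
A minimal sketch of that workaround in context (assuming tm >= 0.6, which is what triggers the error in the question):

corpus <- Corpus(VectorSource(df$text))
corpus <- tm_map(corpus, tolower)            # leaves plain character values behind
corpus <- tm_map(corpus, PlainTextDocument)  # wraps them back into PlainTextDocument objects
corpus <- tm_map(corpus, removePunctuation)  # later tm transformations work again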

Answer 2 (score: 1)

I had the same problem and finally found a solution:

The meta information inside the corpus object seems to get corrupted after applying transformations to it.

What I did was simply recreate the corpus at the very end of the process, once the preparation was finished. To work around other problems, I also wrote a loop to copy the text back into my data frame:

a <- list()
for (i in seq_along(corpus)) {
  a[[i]] <- gettext(corpus[[i]][[1]])  # Do not use $content here!
}

df$text <- unlist(a)
corpus <- Corpus(VectorSource(df$text))  # This action restores the corpus.

Answer 3 (score: 0)

The order of the text operations matters: you should remove stopwords before removing punctuation, otherwise contractions such as "can't" lose their apostrophes and no longer match the stopword list.

I use the following to prepare the text. My text is contained in cleanData$LikeMost.

Sometimes, depending on the source, you first need the following:

cleanData$LikeMost <- iconv(cleanData$LikeMost, to = "utf-8")

Some stopwords are important, so you can create a revised list that keeps them:

# Create a revised stopword list
newWords <- stopwords("english")
keep <- c("no", "more", "not", "can't", "cannot", "isn't", "aren't", "wasn't",
          "weren't", "hasn't", "haven't", "hadn't", "doesn't", "don't", "didn't", "won't")

newWords <- newWords[!newWords %in% keep]
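
As a quick sanity check (a hypothetical one-liner, not part of the original answer), you can confirm that none of the kept negations remain in the revised list:

any(keep %in% newWords)  # should return FALSE after the subsetting above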

Then you can run your tm functions:

like <- Corpus(VectorSource(cleanData$LikeMost))
like <- tm_map(like, PlainTextDocument)
like <- tm_map(like, removeWords, newWords)
like <- tm_map(like, removePunctuation)
like <- tm_map(like, removeNumbers)
like <- tm_map(like, stripWhitespace)