I am trying to build a document-term matrix using pre-identified terms. The corpus is identified by the variable cname, and the file containing the pre-identified terms is read into the terms variable and then converted to a list. When I run the code below, I get an empty DTM. The code I used is below. Any ideas about what I am doing wrong? Thanks!!!
Tom
library(tm)
library(Rmpfr)
library(stm)
#Loading Documents
cname <- file.path("", "corpus", "goodsmoklss")
corp <- VCorpus(DirSource(cname))
# Transformations
docs <- tm_map(corp, tolower)       # all lower case
docs <- tm_map(docs, removeNumbers) # remove numbers
# remove stopwords like "is", "was", "the", etc.
docs <- tm_map(docs, removeWords, stopwords("english"))
# make sure each document is a PlainTextDocument
documents <- tm_map(docs, PlainTextDocument)
# read in the list of pre-identified terms
terms <- read.delim("C:/corpus/TermList.csv", header = FALSE, stringsAsFactors = FALSE)
tokenizing.phrases <- c(terms)
library("RWeka")
phraseTokenizer <- function(x) {
  require(stringr)
  x <- as.character(x)  # extract the plain text from the TextDocument object
  x <- str_trim(x)
  if (is.na(x)) return("")
  phrase.hits <- str_detect(x, coll(tokenizing.phrases))
  if (any(phrase.hits)) {
    # only split once, on the first hit, so we don't have to worry about
    # multiple occurrences of the same phrase
    split.phrase <- tokenizing.phrases[which(phrase.hits)[1]]
    # warning(paste("split phrase:", split.phrase))
    temp <- unlist(str_split(x, coll(split.phrase), 2))
    out <- c(phraseTokenizer(temp[1]), split.phrase, phraseTokenizer(temp[2]))
  } else {
    # out <- MC_tokenizer(x)
    out <- " "
  }
  # get rid of any extraneous empty strings, which can happen
  # if a phrase occurs just before punctuation
  out[out != ""]
}
dtm <- DocumentTermMatrix(documents, control = list(tokenize = phraseTokenizer))
Answer 0 (score: 0)
I am not familiar with tm, but in the quanteda package you can simply subset or filter; the same principle should apply here. I think you should be able to build the DTM and then filter it down to the vector of terms you are interested in. First make the DTM as above.
v <- ("your","terms","here")
to_filter <- colnames(dtm)
#then you can simply filter based on the vector
dtm2 <- dtm[,to_filter %in% v]
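For comparison, here is roughly what the same keep-only-these-terms filter looks like in quanteda, which the answer alludes to. This is a minimal sketch, not the asker's code: the txts vector is a stand-in for your raw document texts, and it assumes a recent quanteda where tokens(), dfm(), and dfm_select() are available.

library(quanteda)

# stand-in texts for illustration only
txts <- c(d1 = "terms appear here", d2 = "other words here")
dfmat <- dfm(tokens(txts))

# keep only the features matching the pre-identified terms
dfmat2 <- dfm_select(dfmat, pattern = v, selection = "keep")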
You may want to consider stemming/truncating the dictionary and the corpus first, though. If the corpus has many terms and documents, memory can become an issue.
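A sketch of that preprocessing step, assuming the SnowballC package is installed (tm's stemDocument relies on it) and that the term list sits in the default V1 column produced by read.delim(header = FALSE):

library(tm)
library(SnowballC)

# stem the corpus before building the DTM
docs <- tm_map(docs, stemDocument)

# the term list needs the same treatment so it still matches the stemmed corpus
v <- wordStem(terms$V1)

Dropping rarely used terms after the fact also helps with memory, e.g. removeSparseTerms(dtm, 0.99) in tm.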