I want to build a list of words that appear at least twice on a specific web page. I manage to fetch the data and get a count for each word, but I need words that contain capital letters to keep them that way. Right now the code produces a word list that is all lowercase. For example, the word "Miami" becomes "miami", while I need it to stay "Miami".
How can I get the words in their original form?
Here is the code:
library(XML)
web_page <- htmlTreeParse("http://www.larryslist.com/artmarket/the-talks/dennis-scholls-multiple-roles-from-collecting-art-to-winning-emmy-awards/"
,useInternal = TRUE)
doctext = unlist(xpathApply(web_page, '//p', xmlValue))
doctext = gsub('\\n', ' ', doctext)
doctext = paste(doctext, collapse = ' ')
library(tm)
SampCrps<- Corpus(VectorSource(doctext))
corp <- tm_map(SampCrps, PlainTextDocument)
oz <- tm_map(corp, removePunctuation, preserve_intra_word_dashes = FALSE) # remove punctuation
oz <- tm_map(corp, removeWords, stopwords("english")) # remove stopwords
dtm <- DocumentTermMatrix(oz)
findFreqTerms(dtm, 2) # words that appear at least 2 times
dtmMatrix <- as.matrix(dtm)
wordsFreq <- colSums(dtmMatrix)
wordsFreq <- sort(wordsFreq, decreasing=TRUE)
head(wordsFreq)
wordsFreq <- as.data.frame(wordsFreq)
wordsFreq <- data.frame(word = rownames(wordsFreq), count = wordsFreq, row.names = NULL)
head(wordsFreq,50)
The same problem occurs when I use these lines to get three-word ngrams:
library(RWeka)
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 3, max = 3))
tdm <- TermDocumentMatrix(oz, control = list(tokenize = BigramTokenizer))
inspect(tdm)
Answer (score: 2):
The problem is that, by default, DocumentTermMatrix() has an option that lowercases your terms. Turn it off and the case will be preserved.
dtm <- DocumentTermMatrix(oz, control = list(tolower = FALSE))
colnames(dtm)[grep(".iami", colnames(dtm))]
## [1] "Miami" "Miami," "Miami." "Miami’s"
An alternative approach using the quanteda package may be simpler:
require(quanteda)
# straight from text to the matrix
dfmMatrix <- dfm(doctext, removeHyphens = TRUE, toLower = FALSE,
ignoredFeatures = stopwords("english"), verbose = FALSE)
# gets frequency counts, sorted in descending order of total term frequency
termfreqs <- topfeatures(dfmMatrix, n = nfeature(dfmMatrix))
# remove those with frequency < 2
termfreqs <- termfreqs[termfreqs >= 2]
head(termfreqs, 20)
## art I artists collecting work We collection collectors
## 35 29 19 17 15 14 13 12
## What contemporary The world us It Miami one
## 11 10 10 10 10 9 9 8
## always many make Art
## 8 8 8 7
We can see that the case of "Miami" (for example) is preserved:
termfreqs[grep(".iami", names(termfreqs))]
## Miami Miami’s
## 9 2
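If you need the result in the same word/count data-frame shape as in your question, a simple base-R conversion of the quanteda frequencies would be (column names word and count are just an assumption to match your earlier code):
termfreqDf <- data.frame(word = names(termfreqs), count = unname(termfreqs),
                         row.names = NULL, stringsAsFactors = FALSE)
head(termfreqDf)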