How can I find frequent adjacent word pairs in a character vector? For example, using the crude data set, some common pairs are "crude oil", "oil market", and "million barrels".
The code for the small example below tries to identify the frequent terms and then, using a positive lookahead assertion, count how many times those frequent terms are immediately followed by a frequent term. But that attempt crashed and burned.
Any guidance would be appreciated on how to create a data frame that shows the common pairs in the first column ("Pairs") and the number of times they appear in the text in the second column ("Count").
library(qdap)
library(tm)
library(stringr) # needed before the str_replace_all() calls below
# from the crude data set, create a text file from the first three documents, then clean it
text <- c(crude[[1]][1], crude[[2]][2], crude[[3]][3])
text <- tolower(text)
text <- tm::removeNumbers(text)
text <- str_replace_all(text, "  ", " ") # replace double spaces with a single space
text <- str_replace_all(text, pattern = "[[:punct:]]", " ")
text <- removeWords(text, stopwords(kind = "SMART"))
# pick the top 10 individual words by frequency, since they will likely form the most common pairs
freq.terms <- head(freq_terms(text.var = text), 10)
# create a pattern from the top words for the regex expression below
freq.terms.pat <- str_c(freq.terms$WORD, collapse = "|")
# match frequent terms that are followed by a frequent term
pairs <- str_extract_all(string = text, pattern = "freq.terms.pat(?= freq.terms.pat)")
This is where the effort falls apart.
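Presumably one culprit is that str_extract_all() receives the literal string "freq.terms.pat" rather than the variable's value. A minimal sketch that at least interpolates the variable (an illustration, not a full solution, since it only captures the first word of each pair):
# sketch: build the regex from the variable's value before matching
pairs.pattern <- paste0("(", freq.terms.pat, ")(?= (", freq.terms.pat, "))")
pairs <- str_extract_all(string = text, pattern = pairs.pattern)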
Not knowing Java or Python, these questions were not helpful to me (Java count word pairs, Python count word pairs), but they may be useful to others.
Thank you.
Answer 0 (score: 3)
First, change your initial text list:
text <- c(crude[[1]][1], crude[[2]][2], crude[[3]][3])
to:
text <- c(crude[[1]][1], crude[[2]][1], crude[[3]][1])
Then you can proceed with your text cleaning (note that your approach will create malformed words like "oilcanadian", but it will suffice for the example at hand):
text <- tolower(text)
text <- tm::removeNumbers(text)
text <- str_replace_all(text, "  ", " ")
text <- str_replace_all(text, pattern = "[[:punct:]]", " ")
text <- removeWords(text, stopwords(kind = "SMART"))
Build a new corpus:
v <- Corpus(VectorSource(text))
Create a bigram tokenizer function:
BigramTokenizer <- function(x) {
  # words() and ngrams() come from the NLP package, attached along with tm
  unlist(
    lapply(ngrams(words(x), 2), paste, collapse = " "),
    use.names = FALSE
  )
}
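For intuition, here is an illustrative check (not part of the original answer) of what NLP's ngrams() produces on a toy token vector:
library(NLP)
toy <- c("crude", "oil", "prices", "rose")
vapply(ngrams(toy, 2L), paste, character(1), collapse = " ")
# [1] "crude oil"   "oil prices"  "prices rose"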
Create your TermDocumentMatrix using the tokenize control parameter:
tdm <- TermDocumentMatrix(v, control = list(tokenize = BigramTokenizer))
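As a quick sanity check (illustrative, not from the original answer) that the matrix is now indexed by bigrams:
head(Terms(tdm))  # should show two-word terms
dim(tdm)          # number of bigram terms x number of documents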
Now that you have your new tdm, to get your desired output, you could do:
library(dplyr)
data.frame(inspect(tdm)) %>%
  add_rownames() %>%
  mutate(total = rowSums(.[, -1])) %>%
  arrange(desc(total))
Which gives:
#Source: local data frame [272 x 5]
#
# rowname X1 X2 X3 total
#1 crude oil 2 0 1 3
#2 mln bpd 0 3 0 3
#3 oil prices 0 3 0 3
#4 cut contract 2 0 0 2
#5 demand opec 0 2 0 2
#6 dlrs barrel 2 0 0 2
#7 effective today 1 0 1 2
#8 emergency meeting 0 2 0 2
#9 oil companies 1 1 0 2
#10 oil industry 0 2 0 2
#.. ... .. .. .. ...
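As a variation that lands directly on the Pairs/Count layout the question asked for, the same counts can be built from as.matrix() without inspect()'s printing; a minimal sketch, assuming the tdm from above:
m <- as.matrix(tdm)  # rows are bigrams, columns are the three documents
pairs.df <- data.frame(Pairs = rownames(m), Count = rowSums(m), row.names = NULL)
head(pairs.df[order(-pairs.df$Count), ], 10)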
Answer 1 (score: 1)
One idea here is to create a new corpus of bigrams. A bigram (or digram) is every sequence of two adjacent elements in a string of tokens.
A recursive function to extract the bigrams:
bigram <-
  function(xs) {
    # emit the first pair joined by "_", then recurse on the tail;
    # returns NULL once fewer than two tokens remain
    if (length(xs) >= 2)
      c(paste(xs[seq(2)], collapse = '_'), bigram(tail(xs, -1)))
  }
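On a small token vector the function behaves like this (illustrative call, not from the original answer):
bigram(c("crude", "oil", "prices"))
# [1] "crude_oil"  "oil_prices"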
Then apply it to the crude data from the tm package. (I do some text cleaning here, but this step depends on the text.)
res <- unlist(lapply(crude, function(x) {
  x <- tm::removeNumbers(tolower(x))
  x <- gsub('\n|[[:punct:]]', ' ', x)
  x <- gsub(' +', ' ', x)  # collapse runs of spaces into a single space
  ## after cleaning, compute bigram frequencies using table
  freqs <- table(bigram(strsplit(x, " ")[[1]]))
  freqs[freqs > 1]
}))
as.data.frame(tail(sort(res), 5))

#                           tail(sort(res), 5)
# reut-00022.xml.hold_a                      3
# reut-00022.xml.in_the                      3
# reut-00011.xml.of_the                      4
# reut-00022.xml.a_futures                   4
# reut-00010.xml.abdul_aziz                  5
The bigrams "abdul aziz" and "a futures" are the most common. You should clean the data again to remove the filler words (of, the, ...), but this should be a good start.
Edit after the OP's comment: If you want to get the bigram frequencies over the whole corpus, the idea is to compute the bigrams within the loop and then compute the frequency of the loop result. I take the opportunity to add better text-cleaning steps.
res <- unlist(lapply(crude, function(x) {
  x <- removeNumbers(tolower(x))
  x <- removeWords(x, words = c("the", "of"))
  x <- removePunctuation(x)
  x <- gsub('\n|[[:punct:]]', ' ', x)
  x <- gsub(' +', ' ', x)  # collapse runs of spaces into a single space
  ## after cleaning, keep bigrams built from words longer than 2 characters
  words <- strsplit(x, " ")[[1]]
  bigrams <- bigram(words[nchar(words) > 2])
}))

library(data.table)  # for setDT()
xx <- as.data.frame(table(res))
setDT(xx)[order(Freq)]
# res Freq
# 1: abdulaziz_bin 1
# 2: ability_hold 1
# 3: ability_keep 1
# 4: ability_sell 1
# 5: able_hedge 1
# ---
# 2177: last_month 6
# 2178: crude_oil 7
# 2179: oil_minister 7
# 2180: world_oil 7
# 2181: oil_prices 14
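To finish with the Pairs/Count data frame the question asked for, one possible last step (a sketch, assuming the xx table and data.table from above):
out <- setDT(xx)[order(-Freq)]                        # most frequent bigrams first
setnames(out, c("res", "Freq"), c("Pairs", "Count"))  # column names taken from the question
head(out, 5)  # per the table above, oil_prices (14) should come out on top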