If I have a string:
moon <- "The cow jumped over the moon with a silver plate in its mouth"
is there a way to extract the words around "moon"? The neighbours could be 2 or 3 words on either side of "moon".
So for my input
"The cow jumped over the moon with a silver plate in its mouth"
I would like the output to be just:
"jumped over the moon with a silver"
I know that if I wanted to extract by character positions I could use str_locate, but I don't know how to do it by "words". Can this be done in R?
Thanks & Regards, Simak
Answer 0 (score: 4)
Using strsplit:
str <- "The cow jumped over the moon with a silver plate in its mouth"
x <- strsplit(str, " ")[[1]]          # split into individual words
i <- which(x == "moon")               # position of the keyword
paste(x[seq(max(1, i - 2), min(i + 2, length(x)))], collapse = " ")
# [1] "over the moon with a"
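A hedged generalization of this answer (word_window() is my own name, not from the answer): wrap it in a helper that takes the window size and also handles a keyword that occurs more than once:

# word_window() is a hypothetical helper, sketched from the answer above
word_window <- function(str, keyword, n = 2) {
    x <- strsplit(str, " ")[[1]]
    idx <- which(x == keyword)                # every occurrence of the keyword
    vapply(idx, function(i) {
        paste(x[max(1, i - n):min(i + n, length(x))], collapse = " ")
    }, character(1))
}

word_window(moon, "moon", n = 3)
# should give "jumped over the moon with a silver"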
Answer 1 (score: 4)
Here's how I'd do it:
# str is the sentence from the question
keyword <- "moon"
lookaround <- 2
pattern <- paste0("([[:alpha:]]+ ){0,", lookaround, "}", keyword,
                  "( [[:alpha:]]+){0,", lookaround, "}")
regmatches(str, regexpr(pattern, str))[[1]]
# [1] "over the moon with a"
Idea: search for a run of alphabetic characters followed by a space, repeated between 0 and "lookaround" times (here 2), then the "keyword" (here "moon"), then a space followed by a run of alphabetic characters, again repeated between 0 and "lookaround" times. The regexpr
function gives the start and end of the first match of this pattern. regmatches
wraps it and extracts the substring at those start/stop positions.
Note: if you want to find multiple occurrences of the same pattern, regexpr
can be replaced with gregexpr
.
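To illustrate the gregexpr variant, a minimal sketch (str2 and its contents are made up for illustration; pattern is the object built above):

str2 <- "the moon rose as the cow jumped over the moon again"   # made-up example
regmatches(str2, gregexpr(pattern, str2))[[1]]
# should give "the moon rose as" and "over the moon again"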
str <- "The cow jumped over the moon with a silver plate in its mouth"
ll <- rep(str, 1e5)
hong <- function(str) {
str <- strsplit(str, " ")
sapply(str, function(y) {
i <- which(y=="moon")
paste(y[seq(max(1, (i-2)), min((i+2), length(y)))], collapse= " ")
})
}
arun <- function(str) {
keyword <- "moon"
lookaround <- 2
pattern <- paste0("([[:alpha:]]+ ){0,", lookaround, "}", keyword,
"( [[:alpha:]]+){0,", lookaround, "}")
regmatches(str, regexpr(pattern, str))
}
require(microbenchmark)
microbenchmark(t1 <- hong(ll), t2 <- arun(ll), times=10)
# Unit: seconds
#            expr      min       lq   median       uq      max neval
#  t1 <- hong(ll) 6.172986 6.384981 6.478317 6.654690 7.193329    10
#  t2 <- arun(ll) 1.175950 1.192455 1.200674 1.227279 1.326755    10
identical(t1, t2) # [1] TRUE
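Since the question mentions str_locate from stringr, note that the same pattern object can also be used with stringr's extractors (a sketch, not part of the original answer):

library(stringr)
str_extract(str, pattern)       # first match, analogous to regexpr()
str_extract_all(str, pattern)   # all matches, analogous to gregexpr()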
Answer 2 (score: 2)
Here's a method using the tm package (when all you've got is a hammer...):
moon <- "The cow jumped over the moon with a silver plate in its mouth"
require(tm)
my.corpus <- Corpus(VectorSource(moon))
# Tokenizer for n-grams and passed on to the term-document matrix constructor
library(RWeka)
neighborhood <- 3 # how many words either side of word of interest
neighborhood1 <- 2 + neighborhood * 2 # length of the n-grams to pull out (8 words here)
ngramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = neighborhood1, max = neighborhood1))
dtm <- TermDocumentMatrix(my.corpus, control = list(tokenize = ngramTokenizer))
inspect(dtm)
# find ngrams that have the word of interest in them
word <- 'moon'
subset_ngrams <- dtm$dimnames$Terms[grep(word, dtm$dimnames$Terms)]
# keep only ngrams with the word of interest in the middle. This
# removes duplicates and lets us see what's on either side
# of the word of interest
subset_ngrams <- subset_ngrams[sapply(subset_ngrams, function(i) {
    tmp <- unlist(strsplit(i, split = " "))
    tmp <- tmp[neighborhood + 1]   # the word sitting 'neighborhood' words in from the left
    tmp} == word)]
# inspect output
subset_ngrams
[1] "jumped over the moon with a silver plate"