I have two strings:
a <- "Roy lives in Japan and travels to Africa"
b <- "Roy travels Africa with this wife"
I want the number of words the two strings have in common.
The answer should be 3, since
"Roy", "travels" and "Africa"
are the common words.
This is my attempt:
stra <- as.data.frame(t(read.table(textConnection(a), sep = " ")))
strb <- as.data.frame(t(read.table(textConnection(b), sep = " ")))
# take unique words to avoid double counting
stra_unique <- as.data.frame(unique(stra$V1))
strb_unique <- as.data.frame(unique(strb$V1))
colnames(stra_unique) <- c("V1")
colnames(strb_unique) <- c("V1")
common_words <- length(merge(stra_unique, strb_unique, by = "V1")$V1)
I need this for datasets of about 2000 and 1200 strings, so in total 2000 x 1200 string pairs have to be evaluated. Is there any fast way to do this without loops?
Answer 0 (score: 7)
You can use strsplit and intersect from base R:
> a <- "Roy lives in Japan and travels to Africa"
> b <- "Roy travels Africa with this wife"
> a_split <- unlist(strsplit(a, split = " "))
> b_split <- unlist(strsplit(b, split = " "))
> length(intersect(a_split, b_split))
[1] 3
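
The same idea can be wrapped in a small helper for reuse. A minimal sketch, assuming words are separated by single spaces (the name common_word_count is illustrative, not from the original post):

# count the words shared by two strings; fixed = TRUE skips regex matching
common_word_count <- function(x, y) {
  length(intersect(unlist(strsplit(x, " ", fixed = TRUE)),
                   unlist(strsplit(y, " ", fixed = TRUE))))
}
common_word_count(a, b)
#[1] 3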
Answer 1 (score: 6)
Perhaps use intersect with str_extract_all (from stringr). For multiple strings, you can put them in a list or a vector:
library(stringr)
vec1 <- c(a, b)
Reduce(`intersect`, str_extract_all(vec1, "\\w+"))
#[1] "Roy" "travels" "Africa"
For a faster option, consider stringi:
library(stringi)
Reduce(`intersect`,stri_extract_all_regex(vec1,"\\w+"))
#[1] "Roy" "travels" "Africa"
To get the count:
length(Reduce(`intersect`,stri_extract_all_regex(vec1,"\\w+")))
#[1] 3
Or using base R:
Reduce(`intersect`,regmatches(vec1,gregexpr("\\w+", vec1)))
#[1] "Roy" "travels" "Africa"
Answer 2 (score: 2)
This approach generalizes to n vectors:
a <- "Roy lives in Japan and travels to Africa"
b <- "Roy travels Africa with this wife"
c <- "Bob also travels Africa for trips but lives in the US unlike Roy."
library(stringi)
library(qdapTools)
X <- stri_extract_all_words(list(a, b, c))  # tokenize each string
X <- mtabulate(X) > 0                       # string-by-word incidence matrix
Y <- colSums(X) == nrow(X)                  # words present in every string
names(Y)[Y]
[1] "Africa" "Roy" "travels"