Following on from the question More efficient means of creating a corpus and DTM, I have prepared my own method for building a term-document matrix from a large corpus, which (I hope) does not require Terms × Documents memory.
library(dplyr)  # for %>%, group_by(), tally(), ungroup()
library(slam)   # for simple_triplet_matrix()
library(tm)     # for as.TermDocumentMatrix() and weightTf

sparseTDM <- function(vc) {
  # collect document ids and raw text from the corpus
  id      = unlist(lapply(vc, function(x) x$meta$id))
  content = unlist(lapply(vc, function(x) x$content))
  # whitespace tokenization, one character vector per document
  out = strsplit(content, "\\s", perl = TRUE)
  names(out) = id
  lev.terms = sort(unique(unlist(out)))
  lev.docs  = id
  # v1: for each document, the sorted integer indices of its tokens in lev.terms
  v1 = lapply(
    out,
    function(x, lev) {
      sort(as.integer(factor(x, levels = lev, ordered = TRUE)))
    },
    lev = lev.terms
  )
  # v2: the matching document index, repeated once per token
  v2 = lapply(
    seq_along(v1),
    function(i, x, n) {
      rep(i, length(x[[i]]))
    },
    x = v1,
    n = names(v1)
  )
  # count occurrences of each (term, document) pair
  stm = data.frame(i = unlist(v1), j = unlist(v2)) %>%
    group_by(i, j) %>%
    tally() %>%
    ungroup()
  # assemble the counts into a sparse triplet matrix
  tmp = simple_triplet_matrix(
    i = stm$i,
    j = stm$j,
    v = stm$n,
    nrow = length(lev.terms),
    ncol = length(lev.docs),
    dimnames = list(Terms = lev.terms, Docs = lev.docs)
  )
  as.TermDocumentMatrix(tmp, weighting = weightTf)
}
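For reference, a minimal sketch of how the function would be invoked, assuming vc is a tm VCorpus from a version of tm where documents expose $meta$id and $content, as the function above expects (the toy texts are hypothetical):

library(tm)
# a tiny hypothetical corpus; the real input is a large VCorpus
vc <- VCorpus(VectorSource(c("the quick brown fox", "the quick red fox")))
tdm <- sparseTDM(vc)
inspect(tdm)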
The slowdown occurs when computing v1. It had been running for 30 minutes when I stopped it.
I prepared a small example:
library(microbenchmark)

b = paste0("string", 1:200000)
a = sample(b, 80)
microbenchmark(
  lapply(
    list(a = a),
    function(x, lev) {
      sort(as.integer(factor(x, levels = lev, ordered = TRUE)))
    },
    lev = b
  )
)
with the result:
Unit: milliseconds
expr min lq mean median uq max neval
... 25.80961 28.79981 31.59974 30.79836 33.02461 98.02512 100
id and content have 126,522 elements each, and lev.terms has 155,591 elements, so it looks like I stopped the processing far too early. Since I will eventually be working with around 6M documents, I have to ask... Is there any way to speed up this code?
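A rough back-of-envelope (a sketch, assuming the ~31 ms median from the benchmark holds per document) shows why it never finished:

0.031 * 126522 / 60    # ~65 minutes for the current 126,522 documents
0.031 * 6e6 / 3600     # ~52 hours for 6M documents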
Answer 0 (score: 1)
I have now sped it up by replacing

sort(as.integer(factor(x, levels = lev, ordered = TRUE)))

with:
ind = which(lev %in% x)
cnt = as.integer(factor(x, levels = lev[ind], ordered = TRUE))
sort(ind[cnt])
Since factor() now only matches tokens against the levels actually present in x instead of all of lev, the timings become:
expr min lq mean median uq max neval
... 5.248479 6.202161 6.892609 6.501382 7.313061 10.17205 100
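A quick sanity check, reusing a and b from the benchmark in the question, confirms the replacement computes the same result:

old <- sort(as.integer(factor(a, levels = b, ordered = TRUE)))
ind <- which(b %in% a)
cnt <- as.integer(factor(a, levels = b[ind], ordered = TRUE))
new <- sort(ind[cnt])
identical(old, new)  # TRUE, as long as every element of a occurs in b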
Answer 1 (score: 1)
I have been through several iterations of solving this problem in creating quanteda::dfm() (see the GitHub repo here), and by far the fastest solution involves using the data.table and Matrix packages to index the documents and tokenized features, count the features within documents, and plug the result straight into a sparse matrix, like this:
require(data.table)
require(Matrix)

dfm_quanteda <- function(x) {
  # assign an integer index to each document, naming unnamed inputs "text1", ...
  docIndex <- 1:length(x)
  if (is.null(names(x)))
    names(docIndex) <- factor(paste("text", 1:length(x), sep = "")) else
    names(docIndex) <- names(x)

  # one row per token: which document it came from, and the token itself
  alltokens <- data.table(docIndex = rep(docIndex, sapply(x, length)),
                          features = unlist(x, use.names = FALSE))
  alltokens <- alltokens[features != ""]  # if there are any "blank" features
  # count each feature within each document
  alltokens[, "n" := 1L]
  alltokens <- alltokens[, by = list(docIndex, features), sum(n)]

  # build a lookup table mapping each unique feature to a column index
  uniqueFeatures <- unique(alltokens$features)
  uniqueFeatures <- sort(uniqueFeatures)
  featureTable <- data.table(featureIndex = 1:length(uniqueFeatures),
                             features = uniqueFeatures)

  # join the counts to the feature indexes
  setkey(alltokens, features)
  setkey(featureTable, features)
  alltokens <- alltokens[featureTable, allow.cartesian = TRUE]
  alltokens[is.na(docIndex), c("docIndex", "V1") := list(1, 0)]

  # plug (docIndex, featureIndex, count) triplets straight into a sparse matrix
  sparseMatrix(i = alltokens$docIndex,
               j = alltokens$featureIndex,
               x = alltokens$V1,
               dimnames = list(docs = names(docIndex), features = uniqueFeatures))
}
require(quanteda)
str(inaugTexts)
## Named chr [1:57] "Fellow-Citizens of the Senate and of the House of Representatives:\n\nAmong the vicissitudes incident to life no event could ha"| __truncated__ ...
## - attr(*, "names")= chr [1:57] "1789-Washington" "1793-Washington" "1797-Adams" "1801-Jefferson" ...
tokenizedTexts <- tokenize(toLower(inaugTexts), removePunct = TRUE, removeNumbers = TRUE)
system.time(dfm_quanteda(tokenizedTexts))
## user system elapsed
## 0.060 0.005 0.064
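Since the question asks for a tm TermDocumentMatrix rather than a Matrix::sparseMatrix, here is a sketch of converting the result, assuming slam and tm are available (Matrix's summary() returns the (i, j, x) triplets of a sparse matrix):

library(slam)
library(tm)

m <- dfm_quanteda(tokenizedTexts)    # docs x features
s <- Matrix::summary(m)              # triplet form: columns i, j, x
stm <- simple_triplet_matrix(
  i = s$j, j = s$i, v = s$x,         # transpose to terms x docs
  nrow = ncol(m), ncol = nrow(m),
  dimnames = list(Terms = colnames(m), Docs = rownames(m))
)
tdm <- as.TermDocumentMatrix(stm, weighting = weightTf)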
Of course this is just a snippet, but the full source code is easy to find in the GitHub repo (dfm-main.R).
I also encourage you to use the full dfm() from the package. You can install it from CRAN, or install the development version with:
devtools::install_github("kbenoit/quanteda")
Try it on your texts and see how it performs.
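For instance, a minimal sketch (default behaviour and argument names vary across quanteda versions, so treat these as assumptions):

require(quanteda)
myDfm <- dfm(inaugTexts)    # tokenizes and lowercases internally by default
topfeatures(myDfm, 10)      # ten most frequent features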
Answer 2 (score: 0)
Have you tried experimenting with the sort method (algorithm), specifying quicksort or shell sort? Something like:
sort(as.integer(factor(x, levels = lev, ordered = TRUE)), method = "shell")
or:
sort(as.integer(factor(x, levels = lev, ordered = TRUE)), method = "quick")
Also, if the sort algorithm is re-executing these steps over and over, you could try evaluating the nested functions via some intermediate variables:
foo <- factor(x, levels = lev, ordered = TRUE)
bar <- as.integer(foo)
sort(bar, method = "quick")
or:
sort(bar)
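If you want to measure whether the sort method matters here, a quick sketch reusing a and b from the question:

library(microbenchmark)
bar <- as.integer(factor(a, levels = b, ordered = TRUE))
microbenchmark(
  sort(bar),
  sort(bar, method = "shell"),
  sort(bar, method = "quick")
)
# note: with only 80 elements the sort itself is cheap;
# per Answer 0, the factor() call is the real bottleneck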
Good luck!