Computing cosine and Jaccard similarity for a document collection in R

Date: 2017-11-23 09:57:33

Tags: r text-mining cosine-similarity

I need to compute the pairwise similarity between nearly 14,000 documents, but the code below takes far too long to run. Is there a faster way to do the same job?

Here is my code:

library(openxlsx) #createWorkbook, addWorksheet, writeData, saveWorkbook
library(tm)       #Corpus, tm_map, DocumentTermMatrix
library(proxy)    #dist() with "cosine" and "jaccard" methods

wb=createWorkbook() #create workbook
addWorksheet(wb,"absSim") #create worksheet
listoffiles=list.files() #get list of documents from current working directory
fileslength=length(listoffiles) #no of documents in directory
rownum=1 #row counter for writing results
for(i in 1:(fileslength-1)) #loop over all unordered document pairs
{
  d1=readLines(listoffiles[i])# read first document 
  k=i+1
  for(j in k:fileslength)
  {
   d2=readLines(listoffiles[j]) #read second document
   #make a vector of two documents
   myvector=c(d1,d2)
   #making corpus of two documents
   mycorpus=Corpus(VectorSource(myvector))
   #preprocessing of corpus
   mycorpus=tm_map(mycorpus,removePunctuation)
   mycorpus=tm_map(mycorpus,removeNumbers)
   mycorpus=tm_map(mycorpus,stripWhitespace)
   mycorpus=tm_map(mycorpus,content_transformer(tolower))
   mycorpus=tm_map(mycorpus,function(x) removeWords(x,stopwords("english")))
   mycorpus=tm_map(mycorpus,function(x) removeWords(x,"x"))
   #make a document term matrix now
   dtm=as.matrix(DocumentTermMatrix(mycorpus))
   #compute distance of both documents using proxy package
   cdist=as.matrix(dist(dtm,method = "cosine"))
   jdist=as.matrix(dist(dtm,method = "jaccard"))
   #compute similarity
   csim=1-cdist
   jsim=1-jdist
   #get similarity of both documents
   cos=csim[1,2]
   jac=jsim[1,2]
   if(cos>0 | jac>0)
   {
     writeData(wb,"absSim",cos,startCol = 1,startRow = rownum)
     writeData(wb,"absSim",jac,startCol = 2,startRow = rownum)
     saveWorkbook(wb,"abstractSimilarity.xlsx",overwrite = TRUE)
     rownum=rownum+1
   }
  }
}

When I run this code, the first document alone takes about 2 hours to process. Any ideas on how to compute cosine and Jaccard similarity faster?

1 Answer:

Answer 0 (score: 1):

You can try the following code. It is a heavily simplified version, without any cleaning or pruning, just to demonstrate how to use text2vec. I also use the tokenizers package for tokenization, since it is a bit faster than the tokenizer in text2vec. To generate test documents I use the sampling function provided by Zach in this question/answer. On my machine it finishes in under a minute. Integrating other similarity measures or preprocessing is of course also possible. I hope this is what you are looking for.

library(text2vec)
library(tokenizers)

#paste a random sample of n elements from x into a single string
samplefun <- function(n, x, collapse){
  paste(sample(x, n, replace=TRUE), collapse=collapse)
}

#10000 random "words" of Poisson-distributed length, built from letters
words <- sapply(rpois(10000, 8) + 1, samplefun, letters, '')

#14000 documents, each made of 100 "sentences" of several words, pasted together
docs <- sapply(1:14000, function(x) {

  paste(sapply(rpois(100, 5) + 1, samplefun, words, ' '), collapse = ". ")

})

iterator <- itoken(docs,
                   tokenizer = function(x) tokenizers::tokenize_words(x, lowercase = FALSE),
                   progressbar = FALSE)

vocabulary <- create_vocabulary(iterator)

dtm <- create_dtm(iterator, vocab_vectorizer(vocabulary))

#dtm
#14000 x 10000 sparse Matrix of class "dgCMatrix"
#....

#use, e.g., the first and second half of the dtm as document sets
similarity <- sim2(dtm[1:(nrow(dtm)/2),]
                   , dtm[(nrow(dtm)/2+1):nrow(dtm),]
                   , method = "jaccard"
                   , norm = "none")

dim(similarity)
#[1] 7000 7000
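
For cosine similarity, the same sim2 call works with method = "cosine" and L2 row normalization. Here is a minimal sketch using the same dtm and split as above (the name cos_similarity is just illustrative):

#cosine similarity between the two document sets;
#norm = "l2" rescales each row so the dot products are true cosine values
cos_similarity <- sim2(dtm[1:(nrow(dtm)/2),]
                       , dtm[(nrow(dtm)/2+1):nrow(dtm),]
                       , method = "cosine"
                       , norm = "l2")

dim(cos_similarity)
#[1] 7000 7000

If you need all pairwise similarities within the full collection, you can pass the whole dtm twice, e.g. sim2(dtm, dtm, method = "jaccard", norm = "none"), at the cost of a 14000 x 14000 result. Pruning rare terms from the vocabulary first (e.g. prune_vocabulary(vocabulary, term_count_min = 5)) would shrink the dtm and speed things up further.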