Building a corpus from phrases

Date: 2014-06-04 13:15:32

Tags: r matrix tf-idf corpus phrase

My documents are:

 doc1 = very good, very bad, you are great
 doc2 = very bad, good restaurent, nice place to visit

I want to split my corpus on commas, so that my final DocumentTermMatrix becomes:

       terms
 docs   very good   very bad   you are great   good restaurent   nice place to visit
  doc1  tf-idf      tf-idf     tf-idf          0                 0
  doc2  0           tf-idf     0               tf-idf            tf-idf

I know how to compute a DocumentTermMatrix for individual words, but not how to create a corpus in R separated for each phrase. A solution in R is preferred, but a workaround in Python is also welcome.

What I have tried:

> library(tm)
> library(RWeka)
> BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 3))
> options(mc.cores=1)
> texts <- c("very good, very bad, you are great","very bad, good restaurent, nice place to visit")
> corpus <- Corpus(VectorSource(texts))
> a <- TermDocumentMatrix(corpus, control = list(tokenize = BigramTokenizer))
> as.matrix(a)

I got:

                        Docs
  Terms                  1 2
  bad good restaurent    0 1
  bad you are            1 0
  good restaurent nice   0 1
  good very bad          1 0
  nice place to          0 1
  place to visit         0 1
  restaurent nice place  0 1
  very bad good          0 1
  very bad you           1 0
  very good very         1 0
  you are great          1 0

What I want is not combinations of words, but only the phrases shown in my matrix above.

3 Answers:

Answer 0 (score: 1)

Here is one approach using the qdap + tm packages:

library(qdap); library(tm); library(qdapTools)

dat <- list2df(list(doc1 = "very good, very bad, you are great",
 doc2 = "very bad, good restaurent, nice place to visit"), "text", "docs")

x <- sub_holder(", ", dat$text)

m <- dtm(wfm(x$unhold(gsub(" ", "~~", x$output)), dat$docs) )
weightTfIdf(m)

inspect(weightTfIdf(m))

## A document-term matrix (2 documents, 5 terms)
## 
## Non-/sparse entries: 4/6
## Sparsity           : 60%
## Maximal term length: 19 
## Weighting          : term frequency - inverse document frequency (normalized) (tf-idf)
## 
##       Terms
## Docs   good restaurent nice place to visit very bad very good you are great
##   doc1       0.0000000           0.0000000        0 0.3333333     0.3333333
##   doc2       0.3333333           0.3333333        0 0.0000000     0.0000000
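The 0.3333333 values follow from tm's normalized tf-idf weighting: each phrase occurs once among the three phrases of its document (tf = 1/3), a phrase unique to one of the two documents has idf = log2(2/1) = 1, and "very bad", which appears in both documents, gets idf = log2(2/2) = 0 and therefore drops out. A quick arithmetic check (plain Python, just to verify the numbers):

```python
import math

n_docs = 2
tf = 1 / 3                          # each phrase occurs once out of 3 phrases per doc
idf_unique = math.log2(n_docs / 1)  # phrase found in only one document
idf_shared = math.log2(n_docs / 2)  # "very bad" is found in both documents

print(tf * idf_unique)  # 0.3333333..., matching the matrix above
print(tf * idf_shared)  # 0.0, which is why "very bad" scores zero
```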

You could also do it all in one go and return a DocumentTermMatrix, though this may be harder to understand:

x <- sub_holder(", ", dat$text)

apply_as_tm(t(wfm(x$unhold(gsub(" ", "~~", x$output)), dat$docs)), 
    weightTfIdf, to.qdap=FALSE)

Answer 1 (score: 0)

What if you just use strsplit to split on the commas and then turn your phrases into single "words" by joining them with some character? For example:

library(tm)
docs <- c(D1 = "very good, very bad, you are great", 
    D2 = "very bad, good restaurent, nice place to visit")

dd <- Corpus(VectorSource(docs))
dd <- tm_map(dd, function(x) {
    PlainTextDocument(
       gsub("\\s+","~",strsplit(x,",\\s*")[[1]]), 
       id=ID(x)
     )
})
inspect(dd)

# A corpus with 2 text documents
# 
# The metadata consists of 2 tag-value pairs and a data frame
# Available tags are:
#   create_date creator 
# Available variables in the data frame are:
#   MetaID 

# $D1
# very~good
# very~bad
# you~are~great
# 
# $D2
# very~bad
# good~restaurent
# nice~place~to~visit

dtm <- DocumentTermMatrix(dd, control = list(weighting = weightTfIdf))
as.matrix(dtm)

This produces:

# Docs good~restaurent nice~place~to~visit very~bad very~good you~are~great
#   D1       0.0000000           0.0000000        0 0.3333333     0.3333333
#   D2       0.3333333           0.3333333        0 0.0000000     0.0000000

Answer 2 (score: 0)

For anyone using text2vec, here is a very convenient solution based on a custom vocabulary:

library(text2vec)
doc1 <- 'very good, very bad, you are great'
doc2 <- 'very bad, good restaurent, nice place to visit'
docs <- list(doc1, doc2)
docs <- sapply(docs, strsplit, split=', ')
vocab <- vocab_vectorizer(create_vocabulary(unique(unlist(docs))))
dtm <- create_dtm(itoken(docs), vocab)
dtm

This results in:

2 x 5 sparse Matrix of class "dgCMatrix"
  very good very bad you are great good restaurent nice place to visit
1         1        1             1               .                   .
2         .        1             .               1                   1

This approach allows more customization when loading files and preparing the vocabulary.
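For readers following along without text2vec, the same custom-vocabulary idea can be mirrored in plain Python: tokenize each document into its comma-separated phrases, build a vocabulary from the unique phrases, and count occurrences. This is a minimal illustrative sketch, with hypothetical variable names:

```python
docs = ["very good, very bad, you are great",
        "very bad, good restaurent, nice place to visit"]

# Tokenize each document into its comma-separated phrases.
tokenized = [[p.strip() for p in d.split(",")] for d in docs]

# Build the vocabulary from the unique phrases, then count occurrences.
vocab = sorted({p for doc in tokenized for p in doc})
index = {p: i for i, p in enumerate(vocab)}
dtm = [[0] * len(vocab) for _ in tokenized]
for row, doc in enumerate(tokenized):
    for phrase in doc:
        dtm[row][index[phrase]] += 1

print(vocab)
print(dtm)  # same zero/nonzero pattern as the text2vec matrix above
```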