I am trying to generate a list of all unigrams through trigrams in R, to eventually make a document-phrase matrix with columns for all of the single words, bigrams, and trigrams.
I expected to find an easy package for this, but didn't succeed. I did end up with RWeka; code and output below, but unfortunately this approach drops all unigrams of one or two characters.
Can this be fixed, or does anyone know another route? Thanks!
library(tm)
library(RWeka)

TrigramTokenizer <- function(x) NGramTokenizer(x,
                                               Weka_control(min = 1, max = 3))
Text = c("Ab Hello world", "Hello ab", "ab")
tt = Corpus(VectorSource(Text))
tdm <- TermDocumentMatrix(tt,
                          control = list(tokenize = TrigramTokenizer))
inspect(tdm)
# <<TermDocumentMatrix (terms: 6, documents: 3)>>
# Non-/sparse entries: 7/11
# Sparsity           : 61%
# Maximal term length: 14
# Weighting          : term frequency (tf)
#
#                  Docs
# Terms             1 2 3
#   ab hello        1 0 0
#   ab hello world  1 0 0
#   hello           1 1 0
#   hello ab        0 1 0
#   hello world     1 0 0
#   world           1 0 0
Here is a version of the ngram() function from the answer below, edited (I think) for efficiency. Basically, when include.all = TRUE, I try to reuse the already-built token strings to get rid of the double loop.
ngram <- function(tokens, n = 2, concatenator = "_", include.all = FALSE) {
  M = length(tokens)
  stopifnot(n > 0)

  # if include.all = FALSE, return NULL when there is nothing to report
  # because the document is shorter than n
  if ((M == 0) || (!include.all && M < n)) {
    return(c())
  }

  # bail if we just want the original tokens, or if we only have one token
  if ((n == 1) || (M == 1)) {
    return(tokens)
  }

  # cap the ngram width at the number of tokens available
  end <- min(M - 1, n - 1)

  all_ngrams <- c()
  toks = tokens
  for (width in 1:end) {
    if (include.all) {
      all_ngrams <- c(all_ngrams, toks)
    }
    # extend each current ngram by one token, reusing the pasted strings
    toks = paste(toks[1:(M - width)], tokens[(1 + width):M], sep = concatenator)
  }
  all_ngrams <- c(all_ngrams, toks)
  all_ngrams
}
ngram( c("A","B","C","D"), n=3, include.all=TRUE )
ngram( c("A","B","C","D"), n=3, include.all=FALSE )
ngram( c("A","B","C","D"), n=10, include.all=FALSE )
ngram( c("A","B","C","D"), n=10, include.all=TRUE )
# edge cases
ngram( c(), n=3, include.all=TRUE )
ngram( "A", n=0, include.all=TRUE )
ngram( "A", n=3, include.all=TRUE )
ngram( "A", n=3, include.all=FALSE )
ngram( "A", n=1, include.all=FALSE )
ngram( "A", n=1, include.all=TRUE )
ngram( c("A","B"), n=1, include.all=FALSE )
ngram( c("A","B"), n=1, include.all=TRUE )
ngram( c("A","B","C"), n=1, include.all=FALSE )
ngram( c("A","B","C"), n=1, include.all=TRUE )
Answer 0 (score: 5)
You're in luck: there is a package for that, quanteda.
install.packages("quanteda")
# or: devtools::install_github("kbenoit/quanteda")
require(quanteda)
Text <- c("Ab Hello world", "Hello ab", "ab")
dfm(Text, ngrams = 1:3, verbose = FALSE)
## Document-feature matrix of: 3 documents, 7 features.
## 3 x 7 sparse Matrix of class "dfmSparse"
##        features
## docs    ab ab_hello ab_hello_world hello hello_ab hello_world world
##   text1  1        1              1     1        0           1     1
##   text2  1        0              0     1        1           0     0
##   text3  1        0              0     0        0           0     0
This creates a document-feature matrix where the "features" are the lower-cased unigrams, bigrams, and trigrams. If you prefer spaces between the words, just add the argument concatenator = " " to the dfm() call.
Problem solved; no need for Weka.
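A caveat for readers on newer quanteda releases: the ngrams argument to dfm() shown above comes from an early version of the package. In current quanteda you tokenize first and build the n-grams explicitly; a rough equivalent (a sketch written against quanteda >= 2.0 as I understand its API, not the answer's original code):

library(quanteda)
toks <- tokens(c("Ab Hello world", "Hello ab", "ab"))
dfm(tokens_ngrams(toks, n = 1:3, concatenator = "_"))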
For the curious, here is the workhorse function that creates the n-grams, where tokens is a character vector (from a separate tokenizer):
ngram <- function(tokens, n = 2, concatenator = "_", include.all = FALSE) {
  # start with lower-order ngrams, or just the specified size if include.all = FALSE
  start <- ifelse(include.all,
                  1,
                  ifelse(length(tokens) < n, 1, n))
  # set max size of ngram at max length of tokens
  end <- ifelse(length(tokens) < n, length(tokens), n)

  all_ngrams <- c()
  # outer loop for all ngrams down to 1
  for (width in start:end) {
    new_ngrams <- tokens[1:(length(tokens) - width + 1)]
    # inner loop for ngrams of width > 1
    if (width > 1) {
      for (i in 1:(width - 1))
        new_ngrams <- paste(new_ngrams,
                            tokens[(i + 1):(length(tokens) - width + 1 + i)],
                            sep = concatenator)
    }
    # paste onto previous results and continue
    all_ngrams <- c(all_ngrams, new_ngrams)
  }
  all_ngrams
}
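A quick check on the first example document, with the output I would expect from the function above:

ngram(c("ab", "hello", "world"), n = 3, include.all = TRUE)
# "ab" "hello" "world" "ab_hello" "hello_world" "ab_hello_world"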
Answer 1 (score: 2)
Oops. It turns out that you can do this via the control list after all. TermDocumentMatrix() ultimately calls the termFreq() method, and you can pass it options such as which tokenizer to use (as above) and which cleanup steps to perform.
So this adjusted version works:
TrigramTokenizer <- function(x) NGramTokenizer(x,
                                               Weka_control(min = 1, max = 3))
Text = c("Ab Hello world", "Hello ab", "ab")
tt = Corpus(VectorSource(Text))
# wordLengths = c(1, Inf) keeps terms of any length; tm's default of
# c(3, Inf) is what was silently dropping the 1- and 2-character unigrams
tdm <- TermDocumentMatrix(tt,
                          control = list(wordLengths = c(1, Inf),
                                         tokenize = TrigramTokenizer))
inspect(tdm)
which gives:
<<TermDocumentMatrix (terms: 7, documents: 3)>>
Non-/sparse entries: 10/11
Sparsity           : 52%
Maximal term length: 14
Weighting          : term frequency (tf)

                Docs
Terms            1 2 3
  ab             1 1 1
  ab hello       1 0 0
  ab hello world 1 0 0
  hello          1 1 0
  hello ab       0 1 0
  hello world    1 0 0
  world          1 0 0
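Since TermDocumentMatrix() simply forwards this control list to termFreq(), you can also sanity-check the options on a single document directly; a minimal sketch using the objects above:

termFreq(tt[[1]], control = list(wordLengths = c(1, Inf),
                                 tokenize = TrigramTokenizer))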