How to keep sentence beginning and ending markers with quanteda

Date: 2016-03-30 23:33:53

Tags: r nlp text-mining tm quanteda

I am trying to create 3-grams with R's quanteda package.

I am struggling to find a way to keep the sentence beginning and ending markers, <s> and </s>, in the n-grams, as in the code below.

I thought that using keptFeatures with a regular expression matching the markers should preserve them, but the chevrons are always stripped.

How can I keep the chevrons from being removed, or, alternatively, what is the best way to delimit the beginning and end of a sentence with quanteda?

As a bonus question, what is the advantage of colSums(mydfm) over docfreq(mydfm), given that their results look nearly identical judging from str(colSums(mydfm)) and str(docfreq(mydfm)) (Named num [1:n] for the former, Named int [1:n] for the latter)?

library(quanteda)
text <- "<s>I'm a sentence and I'd better be formatted properly!</s><s>I'm a second sentence</s>"

qc <- corpus(text)

mydfm  <- dfm(qc, ngram=3, removeNumbers = F, stem=T, keptFeatures="\\</?s\\>")

names(colSums(mydfm))

# Output:
# [1] "s_i'm_a"    "i'm_a_sentenc"    "a_sentenc_and"    "sentenc_and_i'd"
# [2] "and_i'd_better"   "i'd_better_be"    "better_be_format"   
# [3] "be_format_proper" "format_proper_s"  "proper_s_s"   "s_s_i'm"    
# [4] "i'm_a_second"   "a_second_sentenc"   "second_sentenc_s"

Edit

Corrected keepFeatures to keptFeatures in the snippet.

2 Answers:

Answer 0 (score: 2)

To return a simple vector, just unlist the tokenizedText object returned from tokenize() (which is a specially classed list with additional attributes). Here I used what = "fasterword", which splits on "\\s" -- it's a tiny bit smarter than what = "fastestword", which splits on " ".

# how to not remove the <s>, and return a vector 
unlist(toks <- tokenize(text, ngrams = 3, what = "fasterword"))
## [1] "<s>I'm_a_sentence"                "a_sentence_and"                  
## [3] "sentence_and_I'd"                 "and_I'd_better"                  
## [5] "I'd_better_be"                    "better_be_formatted"             
## [7] "be_formatted_properly!</s><s>I'm" "formatted_properly!</s><s>I'm_a" 
## [9] "properly!</s><s>I'm_a_second"     "a_second_sentence</s>" 

To keep the ngrams within sentences, tokenize the object twice: first by sentence, then by fasterword.

# keep it within sentence
(sents <- unlist(tokenize(text, what = "sentence")))
## [1] "<s>I'm a sentence and I'd better be formatted properly!"
## [2] "</s><s>I'm a second sentence</s>" 
tokenize(sents, ngrams = 3, what = "fasterword")
## tokenizedText object from 2 documents.
## Component 1 :
## [1] "<s>I'm_a_sentence"      "a_sentence_and"         "sentence_and_I'd"       "and_I'd_better"        
## [5] "I'd_better_be"          "better_be_formatted"    "be_formatted_properly!"
## 
## Component 2 :
## [1] "</s><s>I'm_a_second"   "a_second_sentence</s>"

To keep the chevron markers in the dfm, you can pass through the same options used above in the tokenize() call, since dfm() calls tokenize() but with different defaults -- ones that most users will probably want, whereas tokenize() is more conservative.

# Bonus questions:
myDfm <- dfm(text, verbose = FALSE, what = "fasterword", removePunct = FALSE)
# "chevron" markers are not removed
features(myDfm)
## [1] "<s>i'm"              "a"                   "sentence"            "and"                 "i'd"                
## [6] "better"              "be"                  "formatted"           "properly!</s><s>i'm" "second"             
## [11] "sentence</s>" 
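
To get back to the trigram question itself, the same options should carry over when ngrams = 3 is added, since dfm() forwards them to tokenize(). The following is only a minimal sketch against the same pre-0.9.9 API used above, not taken from the original answer.

# Hedged sketch (not from the original answer): the same tokenizer options
# combined with ngrams = 3 should yield trigrams that keep the chevrons.
myDfm3 <- dfm(text, verbose = FALSE, what = "fasterword", removePunct = FALSE, ngrams = 3)
features(myDfm3)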

The last part of the bonus question concerns the difference between docfreq() and colSums(). The former returns the count of documents in which a term occurs; the latter sums each column to give the total term frequency across documents. See below how the two differ for the word "representatives".

# Difference between docfreq() and colSums():
myDfm2 <- dfm(inaugTexts[1:4], verbose = FALSE)
myDfm2[, "representatives"]
## Document-feature matrix of: 4 documents, 1 feature.
## 4 x 1 sparse Matrix of class "dfmSparse"
##                  features
## docs              representatives
##   1789-Washington               2
##   1793-Washington               0
##   1797-Adams                    2
##   1801-Jefferson                0
docfreq(myDfm2)["representatives"]
## representatives 
##               2 
colSums(myDfm2)["representatives"]
## representatives 
##               4 

Update: some commands and behaviours have changed in quanteda v0.9.9:

To return a simple vector, keeping the chevrons:

as.character(toks <- tokens(text, ngrams = 3, what = "fasterword"))
#  [1] "<s>I'm_a_sentence"                "a_sentence_and"                   "sentence_and_I'd"                
#  [4] "and_I'd_better"                   "I'd_better_be"                    "better_be_formatted"             
#  [7] "be_formatted_properly!</s><s>I'm" "formatted_properly!</s><s>I'm_a"  "properly!</s><s>I'm_a_second"    
# [10] "a_second_sentence</s>" 

To keep the ngrams within sentences:

(sents <- as.character(tokens(text, what = "sentence")))
# [1] "<s>I'm a sentence and I'd better be formatted properly!" "</s><s>I'm a second sentence</s>"                       
tokens(sents, ngrams = 3, what = "fasterword")
# tokens from 2 documents.
# Component 1 :
# [1] "<s>I'm_a_sentence"      "a_sentence_and"         "sentence_and_I'd"       "and_I'd_better"         "I'd_better_be"         
# [6] "better_be_formatted"    "be_formatted_properly!"
# 
# Component 2 :
# [1] "</s><s>I'm_a_second"   "a_second_sentence</s>"

Bonus question, part 1:

featnames(dfm(text, verbose = FALSE, what = "fasterword", removePunct = FALSE))
#  [1] "<s>i'm"              "a"                   "sentence"            "and"                 "i'd"                
#  [6] "better"              "be"                  "formatted"           "properly!</s><s>i'm" "second"             
# [11] "sentence</s>"

Nothing has changed for part 2 of the bonus question.
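
For completeness, here is a minimal sketch of the same docfreq()/colSums() comparison under the post-0.9.9 workflow. It is my addition rather than part of the original answer, and it assumes a quanteda version in which the inaugural corpus ships as data_corpus_inaugural with a Year docvar.

# Hedged sketch: docfreq() vs. colSums() with the newer tokens()/dfm() pipeline.
# Assumes data_corpus_inaugural and its Year docvar exist in your quanteda version.
dfm_inaug <- dfm(tokens(corpus_subset(data_corpus_inaugural, Year <= 1801)))
docfreq(dfm_inaug)["representatives"]   # number of documents containing the term
colSums(dfm_inaug)["representatives"]   # total count of the term across documents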

Answer 1 (score: 1)

How about something like this:

ngrams(
  tokenize(
    unlist(
      segment(text, what = "other", delimiter = "(?<=\\</s\\>)", perl = TRUE)),
    what = "fastestword", simplify = TRUE),
  n = 3L)

# [1] "<s>I'm_a_sentence"              "a_sentence_and"                
# [3] "sentence_and_I'd"               "and_I'd_better"                
# [5] "I'd_better_be"                  "better_be_formatted"           
# [7] "be_formatted_properly!</s>"     "formatted_properly!</s>_<s>I'm"
# [9] "properly!</s>_<s>I'm_a"         "<s>I'm_a_second"               
#[11] "a_second_sentence</s>"

Or, if you do not want ngrams that span sentence boundaries:

unlist(
  ngrams(
    tokenize(
      unlist(
        segment(text, what = "other", delimiter = "(?<=\\</s\\>)", perl = TRUE)),
      what = "fastestword"),
    n = 3L))
#[1] "<s>I'm_a_sentence"          "a_sentence_and"            
#[3] "sentence_and_I'd"           "and_I'd_better"            
#[5] "I'd_better_be"              "better_be_formatted"       
#[7] "be_formatted_properly!</s>" "<s>I'm_a_second"           
#[9] "a_second_sentence</s>" 

I leave the customization options (e.g. removePunct = TRUE, etc.) to you.
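
For instance, one simple route (my own sketch, not part of the answer) is to strip unwanted punctuation with base R before segmenting, so that the <s> and </s> markers themselves are left untouched; the text_clean object below is only illustrative, reusing the same segment()/tokenize()/ngrams() pipeline as above.

# Hedged sketch: pre-clean punctuation you do not want (here sentence-final "!")
# with base R, leaving the <s>/</s> markers intact, then segment as before.
text_clean <- gsub("[!?.]", "", text)
ngrams(
  tokenize(
    unlist(
      segment(text_clean, what = "other", delimiter = "(?<=\\</s\\>)", perl = TRUE)),
    what = "fastestword"),
  n = 3L)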