Creating bigrams with a dictionary in quanteda

Time: 2015-12-26 19:41:36

Tags: r quanteda

I am trying to remove typos from my text analysis data, so I am using the dictionary feature of the quanteda package. It works fine for unigrams, but it gives unexpected output for bigrams. I am not sure how to handle typos so that they don't sneak into my bigrams and trigrams.

ZTestCorp1 <- c("The new law included a capital gains tax, and an inheritance tax.", 
                "New York City has raised a taxes: an income tax and a sales tax.")

ZcObj <- corpus(ZTestCorp1)

mydict <- dictionary(list("the"="the", "new"="new", "law"="law", 
                      "capital"="capital", "gains"="gains", "tax"="tax", 
                      "inheritance"="inheritance", "city"="city")) 

Zdfm1 <- dfm(ZcObj, ngrams=2, concatenator=" ", 
         what = "fastestword", 
         toLower=TRUE, removeNumbers=TRUE,
         removePunct=TRUE, removeSeparators=TRUE,
         removeTwitter=TRUE, stem=FALSE,
         ignoredFeatures=NULL,
         language="english", 
         dictionary=mydict, valuetype="fixed")

wordsFreq1 <- colSums(sort(Zdfm1))

Current output

> wordsFreq1
    the         new         law     capital       gains         tax inheritance        city 
      0           0           0           0           0           0           0           0 

Without using the dictionary, the output is as follows:

> wordsFreq
    tax and         the new         new law    law included      included a       a capital 
          2               1               1               1               1               1 
capital gains       gains tax          and an  an inheritance inheritance tax        new york 
          1               1               1               1               1               1 
  york city        city has      has raised        raised a         a taxes        taxes an 
          1               1               1               1               1               1 
  an income      income tax           and a         a sales       sales tax 
          1               1               1               1               1

Expected bigrams

The new
new law
law capital
capital gains
gains tax
tax inheritance
inheritance city  

P.S. I had assumed that tokenization is done after the dictionary matching, but based on the results I am seeing, that doesn't appear to be the case.

On another note, I tried creating my dictionary object as

mydict <- dictionary(list(mydict=c("the", "new", "law", "capital", "gains", 
                      "tax", "inheritance", "city"))) 

But it didn't work, so I had to use the approach above, which I think is inefficient.
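For what it's worth, a sketch of why that single-key dictionary form behaves unexpectedly (written against the current `tokens_*` API, so it assumes a newer quanteda than the 2015 release used above): a dictionary *lookup* replaces each matched token with its key, whereas feature *selection* keeps the matched tokens themselves.

```r
library(quanteda)

toks <- tokens(tolower("The new law included a capital gains tax."),
               remove_punct = TRUE)

mydict <- dictionary(list(mydict = c("the", "new", "law", "capital",
                                     "gains", "tax", "inheritance", "city")))

# lookup: every matched word collapses into the single feature "mydict"
tokens_lookup(toks, mydict)

# selection: the matched words are kept as-is, which is what you wanted
tokens_select(toks, mydict, selection = "keep")
```

So a one-key dictionary used for lookup destroys exactly the word identities you were trying to preserve.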

Update: added the output based on Ken's solution:

> (myDfm1a <- dfm(ZcObj, verbose = FALSE, ngrams=2, 
+                keptFeatures = c("the", "new", "law", "capital", "gains",  "tax", "inheritance", "city")))
Document-feature matrix of: 2 documents, 14 features.
2 x 14 sparse Matrix of class "dfmSparse" features
docs    the_new new_law law_included a_capital capital_gains gains_tax   tax_and an_inheritance
text1       1       1            1         1             1         1       1               1
text2       0       0            0         0             0         0       1              0
   features
docs    inheritance_tax new_york york_city city_has income_tax sales_tax
text1               1        0         0        0          0         0
text2               0        1         1        1          1         1

1 Answer:

Answer 0 (score: 6)

Updated 2017-06-21 for the newer version of quanteda

Glad to see you are using the package! I think there are two issues you are struggling with. The first is how to apply feature selection before forming ngrams. The second is how to define the feature selection (in quanteda).

First issue: how to apply feature selection before forming ngrams. Here, you have defined a dictionary to do this. (As I will show below, that is not necessary here.) You want to remove all terms not in a selection list, and then form bigrams. quanteda does not do this by default, because the result is not the standard form of a "bigram", in which the words are collocated within some window defined by strict adjacency. For example, in your expected results, law capital is not a pair of adjacent terms, which is the usual definition of a bigram.

However, we can override this behaviour by constructing the document-feature matrix more manually.

First, tokenize the texts.

# tokenize the original
toks <- tokens(ZcObj, removePunct = TRUE, removeNumbers = TRUE) %>%
  tokens_tolower()
toks
## tokens object from 2 documents.
## text1 :
##  [1] "the"         "new"         "law"         "included"    "a"           "capital"     "gains"       "tax"         "and"         "an"          "inheritance" "tax"        
## 
## text2 :
##  [1] "new"    "york"   "city"   "has"    "raised" "a"      "taxes"  "an"     "income" "tax"    "and"    "a"      "sales"  "tax"  

Now, we apply your dictionary mydict to the tokenized texts, using tokens_select():
(toksDict <- tokens_select(toks, mydict, selection = "keep"))
## tokens object from 2 documents.
## text1 :
##  [1] "the"         "new"         "law"         "capital"     "gains"       "tax"         "inheritance" "tax"        
## 
## text2 :
##  [1] "new"  "city" "tax"  "tax" 

From this selected set of tokens, we can now form bigrams (or we could just feed toksDict straight to dfm()):

(toks2 <- tokens_ngrams(toksDict, n = 2, concatenator = " "))
## tokens object from 2 documents.
## text1 :
##  [1] "the new"         "new law"         "law capital"     "capital gains"   "gains tax"       "tax inheritance" "inheritance tax"
## 
## text2 :
##  [1] "new city" "city tax" "tax tax" 

# now create the dfm
(myDfm2 <- dfm(toks2))
## Document-feature matrix of: 2 documents, 10 features.
## 2 x 10 sparse Matrix of class "dfm"
##        features
## docs    the new new law law capital capital gains gains tax tax inheritance inheritance tax new city city tax tax tax
##   text1       1       1           1             1         1               1               1        0        0       0
##   text2       0       0           0             0         0               0               0        1        1       1
topfeatures(myDfm2)
#     the new         new law     law capital   capital gains       gains tax tax inheritance inheritance tax        new city        city tax         tax tax 
#           1               1               1               1               1               1               1               1               1               1 

The feature list is now very close to what you wanted.

The second issue is why your dictionary approach was inefficient. It is because you are creating a dictionary to perform feature selection, but not really using it as a dictionary -- in other words, a dictionary in which each key is equal to its own value, so the values are not really acting as a dictionary at all. Just give it a character vector of the tokens to select, and it will work fine, e.g.:

(myDfm1 <- dfm(ZcObj, verbose = FALSE, 
               keptFeatures = c("the", "new", "law", "capital", "gains", "tax", "inheritance", "city")))
## Document-feature matrix of: 2 documents, 8 features.
## 2 x 8 sparse Matrix of class "dfm"
##        features
## docs    the new law capital gains tax inheritance city
##   text1   1   1   1       1     1   2           1    0
##   text2   0   1   0       0     0   2           0    1
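A note for readers on recent quanteda releases: the preprocessing arguments shown above (keptFeatures, ngrams, removePunct, etc.) have since been removed from dfm(), so the whole workflow is now done on tokens objects. A sketch of the equivalent pipeline, assuming quanteda version 2 or later:

```r
library(quanteda)

ZTestCorp1 <- c("The new law included a capital gains tax, and an inheritance tax.",
                "New York City has raised a taxes: an income tax and a sales tax.")
keep <- c("the", "new", "law", "capital", "gains", "tax", "inheritance", "city")

# tokenize, lowercase, select the wanted words, then form bigrams and the dfm
myDfm2 <- tokens(ZTestCorp1, remove_punct = TRUE) %>%
  tokens_tolower() %>%
  tokens_select(pattern = keep, selection = "keep") %>%
  tokens_ngrams(n = 2, concatenator = " ") %>%
  dfm()
```

Dropping the tokens_ngrams() step gives the unigram selection shown in the final example above.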