R text mining with quanteda

Date: 2015-06-24 14:37:57

Tags: r text-mining text-analysis quanteda

I have a dataset of Facebook posts (exported via Netvizz), and I am working with the quanteda package in R. Here is my R code:

# Load the relevant dictionary (relevant for analysis)
liwcdict <- dictionary(file = "D:/LIWC2001_English.dic", format = "LIWC")

# Read File
# Facebooks posts could be generated by  FB Netvizz 
# https://apps.facebook.com/netvizz
# Load FB posts as .csv-file from .zip-file 
fbpost <- read.csv("D:/FB-com.csv", sep=";")

# Define the relevant column(s)
fb_test <- as.character(fbpost$comment_message) # one column with 2700 entries
# Define as corpus
fb_corp <- corpus(fb_test)
class(fb_corp)

# Apply the LIWC dictionary
fb_liwc <- dfm(fb_corp, dictionary=liwcdict)
View(fb_liwc)

Everything works fine until this point:

> fb_liwc<-dfm(fb_corp, dictionary=liwcdict)
Creating a dfm from a corpus ...
   ... indexing 2,760 documents
   ... tokenizing texts, found 77,923 total tokens
   ... cleaning the tokens, 1584 removed entirely
   ... applying a dictionary consisting of 68 key entries
Error in `dimnames<-.data.frame`(`*tmp*`, value = list(docs = c("text1",  : 
  invalid 'dimnames' given for data frame

How do you interpret the error message? Any suggestions on how to solve the problem?

1 Answer:

Answer 0 (score: 1)

There was a bug in quanteda version 0.7.2 that caused dfm() to fail when using a dictionary if any document contained no features. Your example fails because, during the cleaning stage, some of the Facebook post "documents" end up with all of their features removed by the cleaning steps.
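As a minimal sketch of how a post can end up empty (base R only; the regex is a rough stand-in for quanteda's actual cleaning, not its real tokenizer):

```r
# Hypothetical example posts: some contain only punctuation/emoticons,
# so stripping punctuation and digits leaves them with zero tokens.
posts   <- c("Great article!", "!!!", ":-) :-)", "See you at 10")
cleaned <- gsub("[[:punct:][:digit:]]", "", posts)

# Documents that have no features left after "cleaning"
empty_docs <- which(nchar(trimws(cleaned)) == 0)
empty_docs
## [1] 2 3
```

Posts 2 and 3 survive as documents in the corpus but contribute no tokens, which is exactly the situation that tripped up dfm() in 0.7.2.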

Not only is this fixed in 0.8.0, but the underlying implementation of dictionaries in dfm() has also changed, resulting in a significant speed improvement. (LIWC is still a large and complex dictionary, and the regular expressions it requires still make it much slower to apply than simply indexing tokens. We will optimize this further.)

devtools::install_github("kbenoit/quanteda")
liwcdict <- dictionary(file = "LIWC2001_English.dic", format = "LIWC")
mydfm <- dfm(inaugTexts, dictionary = liwcdict)
## Creating a dfm from a character vector ...
##    ... indexing 57 documents
##    ... lowercasing
##    ... tokenizing
##    ... shaping tokens into data.table, found 134,024 total tokens
##    ... applying a dictionary consisting of 68 key entries
##    ... summing dictionary-matched features by document
##    ... indexing 68 feature types
##    ... building sparse matrix
##    ... created a 57 x 68 sparse dfm
##    ... complete. Elapsed time: 14.005 seconds.
topfeatures(mydfm, decreasing=FALSE)
## Fillers   Nonfl   Swear      TV  Eating   Sleep   Groom   Death  Sports  Sexual 
##       0       0       0      42      47      49      53      76      81     100 

It also works now when a document contains zero features after tokenization and cleaning, which is probably what was breaking the older version of dfm() on your Facebook texts:

mytexts <- inaugTexts
mytexts[3] <- ""
mydfm <- dfm(mytexts, dictionary = liwcdict, verbose = FALSE)
which(rowSums(mydfm)==0)
## 1797-Adams 
##          3
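Until you can upgrade, one workaround sketch (my own suggestion, not part of the answer above) is to drop empty or whitespace-only comments before building the corpus, so no zero-feature document reaches dfm():

```r
# Hypothetical comment vector standing in for fbpost$comment_message
fb_test <- c("Nice post", "", "   ", "Thanks!")

# Keep only comments that contain at least one non-whitespace character
fb_test_clean <- fb_test[nchar(trimws(fb_test)) > 0]
length(fb_test_clean)
## [1] 2
```

Note that this only removes literally empty posts; a post consisting solely of punctuation would still become empty during cleaning, so upgrading to 0.8.0 remains the proper fix.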