R sentiment analysis with phrases in a dictionary

Date: 2015-09-04 09:53:58

Tags: r twitter machine-learning sentiment-analysis

I am carrying out sentiment analysis on a set of tweets, and I would now like to know how to add phrases to the positive and negative dictionaries.

I have read in the files of the phrases I want to test, but when I run the sentiment analysis it does not give me a result.

Reading through the sentiment algorithm, I can see that it matches words against the dictionaries, but is there a way to scan for both words and phrases?

Here is the code:

score.sentiment = function(sentences, pos.words, neg.words, .progress='none')
{
  require(plyr)  
  require(stringr)  
  # we got a vector of sentences. plyr will handle a list  
  # or a vector as an "l" for us  
  # we want a simple array ("a") of scores back, so we use  
  # "l" + "a" + "ply" = "laply":  
  scores = laply(sentences, function(sentence, pos.words, neg.words) {
    # clean up sentences with R's regex-driven global substitute, gsub():
    sentence = gsub('[[:punct:]]', '', sentence)
    sentence = gsub('[[:cntrl:]]', '', sentence)
    sentence = gsub('\\d+', '', sentence)    
    # and convert to lower case:    
    sentence = tolower(sentence)    
    # split into words. str_split is in the stringr package    
    word.list = str_split(sentence, '\\s+')    
    # sometimes a list() is one level of hierarchy too much    
    words = unlist(word.list)    
    # compare our words to the dictionaries of positive & negative terms
    pos.matches = match(words, pos.words)
    neg.matches = match(words, neg.words)
    # match() returns the position of the matched term or NA    
    # we just want a TRUE/FALSE:    
    pos.matches = !is.na(pos.matches)   
    neg.matches = !is.na(neg.matches)   
    # and conveniently enough, TRUE/FALSE will be treated as 1/0 by sum():
    score = sum(pos.matches) - sum(neg.matches)    
    return(score)    
  }, pos.words, neg.words, .progress=.progress )  
  scores.df = data.frame(score=scores, text=sentences)  
  return(scores.df)  
}
analysis=score.sentiment(Tweets, pos, neg)
table(analysis$score)
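For context, the inputs pos, neg and Tweets are created before the call. A minimal sketch of that setup, assuming the lexicons are plain-text files with one term per line (the file names below are hypothetical):

# hypothetical file names; one term per line, lines starting with ';' are comments
pos = scan('positive-words.txt', what='character', comment.char=';')
neg = scan('negative-words.txt', what='character', comment.char=';')
# one tweet per line
Tweets = readLines('tweets.txt')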

Here is the result I get:

 0 
20 

whereas I am after the standard table that this function provides, e.g.:

-2 -1 0 1 2 
 1  2 3 4 5 


Does anyone have any ideas on how to run this on phrases? Note: the Tweets file is a file of sentences.

1 Answer:

Answer 0 (score: 1):

The function score.sentiment seems to work. If I try a very simple setup,

Tweets = c("this is good", "how bad it is")
neg = c("bad")
pos = c("good")
analysis=score.sentiment(Tweets, pos, neg)
table(analysis$score)

I get the expected result,

> table(analysis$score)

-1  1 
 1  1 

How are you feeding your 20 tweets to the function? Judging from the result you posted, 0 20, I would say the problem is that none of your 20 tweets contains any positive or negative word, although of course you would have noticed if that were the case. Maybe if you posted more details about your list of tweets and your positive and negative words, it would be easier to help you.
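One quick way to check that hypothesis (a sketch, assuming Tweets, pos and neg are already loaded and stringr is attached):

# which dictionary terms actually occur anywhere in the tweets?
words = unlist(str_split(tolower(gsub('[[:punct:]]', '', Tweets)), '\\s+'))
intersect(words, pos)   # positive terms present in the tweets
intersect(words, neg)   # negative terms present in the tweets

If both calls return character(0), no dictionary term ever matches and every tweet scores 0, which would explain the 0 20 table.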

In any case, your function seems to be working fine.

Hope it helps.

Edit after clarification in the comments:

Actually, to solve your problem, you need to tokenize your sentences into n-grams, where n corresponds to the maximum number of words used in your positive and negative lists of n-grams. You can see how to do this, for example, in this SO question. For completeness, and because I have tested it myself, here is an example of what you could do. I simplify it to bigrams (n = 2) and use the following input:

Tweets = c("rewarding hard work with raising taxes and VAT. #LabourManifesto", 
              "Ed Miliband is offering 'wrong choice' of 'more cuts' in #LabourManifesto")
pos = c("rewarding hard work")
neg = c("wrong choice")

You can create a bigram tokenizer like this,

library(tm)
library(RWeka)
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min=2,max=2))

And test it,

> BigramTokenizer("rewarding hard work with raising taxes and VAT. #LabourManifesto")
[1] "rewarding hard"       "hard work"            "work with"           
[4] "with raising"         "raising taxes"        "taxes and"           
[7] "and VAT"              "VAT #LabourManifesto"

Then, in your method, you just substitute this line,

word.list = str_split(sentence, '\\s+')

with this,

word.list = BigramTokenizer(sentence)

Of course, it would be better if you renamed word.list to ngram.list or something similar.

The result is, as expected,

> table(analysis$score)

-1  0 
 1  1 

Just decide on the n-gram size you need, set it in the tokenizer, and you should be fine.
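If your lists mix single words with phrases of different lengths, one option (a sketch using the same RWeka tokenizer, not part of the original answer) is to emit every n-gram size up to your longest phrase in one pass:

# min=1, max=3: returns unigrams, bigrams and trigrams together,
# so plain words and phrases up to three words long can all match
MultigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min=1, max=3))

The name MultigramTokenizer is just an illustrative choice; plug it in exactly where BigramTokenizer was used above.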

Hope it helps.