This question has two parts. An answer to either part would be an adequate solution. Suggestions shown as R code would be greatly appreciated.
1) The NRC lexicon from the syuzhet package yields the widest range of emotions, but it does not seem to handle negators. After reading the documentation I am still not sure how to overcome this. Perhaps by multiplying the positive/negative codes of the words in each sentence, e.g. I(0) AM(0) NOT(-1) ANGRY(-1) = (-1 * -1) = 1. However, I do not know how to write this as proper code.
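The multiplication idea above could be sketched in base R roughly as follows. This is only a minimal illustration with a hand-made lookup table; the word codes are invented for the example, not taken from the NRC lexicon:

```r
# Illustrative polarity codes (NOT the real NRC values) for a few words.
codes <- c("not" = -1, "never" = -1, "angry" = -1, "happy" = 1, "bad" = -1)

score_sentence <- function(sentence) {
  words <- tolower(strsplit(sentence, "\\s+")[[1]])
  vals  <- codes[words]                    # look up each word's code
  vals  <- vals[!is.na(vals) & vals != 0]  # keep only coded (nonzero) words
  if (length(vals) == 0) return(0)
  prod(vals)                               # e.g. NOT(-1) * ANGRY(-1) = 1
}

score_sentence("I am not angry")  # 1 (negated negative reads as positive)
score_sentence("It was bad")      # -1
```

A double negation ("it is never not bad") would also come out as the product of its codes, which may or may not be the behaviour you want.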
2) After a lot of research and testing, I found that the jockers_rinker lexicon in sentimentr handles negators and amplifiers better (https://github.com/trinker/sentimentr#comparing-sentimentr-syuzhet-meanr-and-stanford). By comparing the binary sentiment output of the two packages, I could use sentimentr as a "quality test" of the syuzhet/NRC results: if they deviate too much, NRC is not accurate enough for that particular body of text. However, I only know how to obtain the individual scores, not the totals per polarity (the sum of the positive values and the sum of the negative values).
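For the polarity totals, one option is simply to subset the per-sentence `sentiment` column by sign and sum each subset. A sketch, using the per-sentence scores that `sentiment()` produced in my test below (with real output you would use `out <- sentiment(MySentiments, ...)` and then `out$sentiment` in place of this vector):

```r
# Per-sentence scores copied from the sentimentr results shown in this post.
scores <- c(0.4330127, 0.6750000, -0.3750000, -0.4330127,
            0.3750000, 0.4330127, -0.4330127)

positive_total <- sum(scores[scores > 0])  # sum of positively scored sentences
negative_total <- sum(scores[scores < 0])  # sum of negatively scored sentences

positive_total  # 1.916025
negative_total  # -1.241025
```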
Below you can see how my test results compare for a set of strings containing emotion words with and without amplifiers and negators.
# syuzhet:
library("syuzhet")
MySentiments <- c("I am happy", "I am very happy", "I am not happy",
                  "It was bad", "It is never bad", "I love it", "I hate it")
get_nrc_sentiment(MySentiments, cl = NULL, language = "english")
#Result:
anger anticipation disgust fear joy sadness surprise trust negative positive
    0            1       0    0   1       0        0     1        0        1
    0            1       0    0   1       0        0     1        0        1
    0            1       0    0   1       0        0     1        0        1
    1            0       1    1   0       1        0     0        1        0
    1            0       1    1   0       1        0     0        1        0
    0            0       0    0   1       0        0     0        0        1
    1            0       1    1   0       1        0     0        1        0
# sentimentr:
library("sentimentr")
MySentiments <- c("I am happy", "I am very happy", "I am not happy",
                  "It was bad", "It is never bad", "I love it", "I hate it")
sentiment(MySentiments,
          polarity_dt = lexicon::hash_sentiment_jockers_rinker,
          valence_shifters_dt = lexicon::hash_valence_shifters,
          hyphen = "", amplifier.weight = 0.8, n.before = 5, n.after = 2,
          question.weight = 1, adversative.weight = 0.25,
          neutral.nonverb.like = FALSE, missing_value = NULL)
#Results:
element_id sentence_id word_count  sentiment
         1           1          3  0.4330127
         2           1          4  0.6750000
         3           1          4 -0.3750000
         4           1          3 -0.4330127
         5           1          4  0.3750000
         6           1          3  0.4330127
         7           1          3 -0.4330127
The first output does not seem to register the significance of "very", "not", and "never".
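One way to run the "quality test" described in part 2 is to reduce both outputs to a binary sign and count how often they agree. A sketch using the two result tables above (the vectors are transcribed by hand here; with real data you would derive them from the packages' outputs):

```r
# Binary polarity from the NRC table: positive column minus negative column.
nrc_sign <- sign(c(1, 1, 1, 0, 0, 1, 0) - c(0, 0, 0, 1, 1, 0, 1))

# Binary polarity from sentimentr: sign of the sentiment column.
sr_sign <- sign(c(0.4330127, 0.6750000, -0.3750000, -0.4330127,
                  0.3750000, 0.4330127, -0.4330127))

agreement <- mean(nrc_sign == sr_sign)
agreement  # 0.7142857: they disagree on "I am not happy" and "It is never bad"
```

The two mismatched sentences are exactly the negated ones, which matches the observation that NRC does not handle negators.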