I am trying to extract (and eventually classify) sentences containing negations from medical reports. An example is:
samples <- c('There is no evidence of a lump',
             'Neither a contusion nor a scar was seen',
             'No inflammation was evident',
             'We found generalised badness here')
I am trying to use the sentimentr package, since it seems able to detect negators. Is there a way to use just the detection of negators, so that the negated sentences can be extracted (preferably into a new data frame for further work)?
Using polarity from qdap only gives summary statistics, and it is based on amplifiers, de-amplifiers, etc., which I don't want to include:
polarity(samples, negators = qdapDictionaries::negation.words)
  all total.sentences total.words ave.polarity sd.polarity stan.mean.polarity
1 all               4          24        0.213       0.254              0.842
I tried the sentimentr package as follows:
extract_sentiment_terms(MyColonData$Endo_ResultText, polarity_dt = lexicon::hash_sentiment_jockers, hyphen = "")
This gives me the neutral, negative, and positive terms:
   element_id sentence_id     negative positive
1:          1           1
2:          2           1         scar
3:          3           1 inflammation  evident
4:          4           1      badness    found
But what I am really looking for are the sentences that contain a negation, without any sentiment interpretation, so that the output would be:
   element_id sentence_id                                negative                          positive
1:          1           1          There is no evidence of a lump
2:          2           1 Neither a contusion nor a scar was seen
3:          3           1             No inflammation was evident
4:          4           1                                         We found generalised badness here
Answer 0 (score: 3)
I think you just want to classify the text as positive or negative based on the presence of a negator, so extracting the negators from lexicon should help.
samples <- c('There is no evidence of a lump',
             'Neither a contusion nor a scar was seen',
             'No inflammation was evident',
             'We found generalised badness here')

# start with the sentences and an empty polarity column
polarity <- data.frame(text = samples, pol = NA)

# hash_valence_shifters[y == 1] holds the negators (e.g. 'no', 'not');
# flag a sentence as Negative if any of them occurs in it
polarity$pol <- ifelse(grepl(paste(lexicon::hash_valence_shifters[y == 1]$x, collapse = '|'),
                             tolower(samples)), 'Negative', 'Positive')
polarity
                                      text      pol
1           There is no evidence of a lump Negative
2  Neither a contusion nor a scar was seen Negative
3              No inflammation was evident Negative
4        We found generalised badness here Positive
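If you then want the negated sentences as a new data frame for further work, as asked in the question, a simple subset of polarity does it (neg_df is just an illustrative name):
# keep only the rows flagged as containing a negator
neg_df <- subset(polarity, pol == 'Negative')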
To format as the OP requested:
reshape2::dcast(polarity,text~pol)
                                      text Negative Positive
1  Neither a contusion nor a scar was seen Negative     <NA>
2              No inflammation was evident Negative     <NA>
3           There is no evidence of a lump Negative     <NA>
4        We found generalised badness here     <NA> Positive
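One caveat, which is my addition rather than part of this answer: grepl here matches substrings, so a short negator like 'no' will also fire inside words such as 'normal' or 'know'. Anchoring each negator at word boundaries avoids that:
# word-boundary-anchored pattern so 'no' does not match inside 'normal'
negators <- lexicon::hash_valence_shifters[y == 1]$x
pattern <- paste0('\\b(', paste(negators, collapse = '|'), ')\\b')
polarity$pol <- ifelse(grepl(pattern, tolower(samples)), 'Negative', 'Positive')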
Answer 1 (score: 2)
If I understand correctly, you want to extract whole sentences whenever one of their words matches a positive or negative entry in lexicon::hash_sentiment_jockers. In that case, you can use the following code (which uses data.table for the intermediate steps; adapt as needed). I hope this is what you are looking for.
library(lexicon)
library(data.table)
library(stringi)
# check the content of the lexicon
lex <- copy(lexicon::hash_sentiment_jockers)
lex
#                  x     y
#     1:     abandon -0.75
#     2:   abandoned -0.50
#     3:   abandoner -0.25
#     4: abandonment -0.25
#     5:    abandons -1.00
#    ---
# 10735:     zealous  0.40
# 10736:      zenith  0.40
# 10737:        zest  0.50
# 10738:      zombie -0.25
# 10739:     zombies -0.25
# only consider binary positive or negative (sign of the score)
pos <- lex[y > 0]
neg <- lex[y < 0]
samples <- c('There is no evidence of a lump',
             'Neither a contusion nor a scar was seen',
             'No inflammation was evident',
             'We found generalised badness here')
# get ids of the samples that include positive/negative terms
samples_pos <- which(stri_detect_regex(samples, paste(pos[,x], collapse = "|")))
samples_neg <- which(stri_detect_regex(samples, paste(neg[,x], collapse = "|")))
# set up data.frames with all positive/negative samples and their ids
df_pos <- data.frame(sentence_id = samples_pos, positive = samples[samples_pos])
df_neg <- data.frame(sentence_id = samples_neg, negative = samples[samples_neg])
# combine the sets
rbindlist(list(df_pos, df_neg), use.names = TRUE, fill = TRUE)
#    sentence_id                          positive                                 negative
# 1:           3       No inflammation was evident                                       NA
# 2:           4 We found generalised badness here                                       NA
# 3:           2                                NA Neither a contusion nor a scar was seen
# 4:           3                                NA             No inflammation was evident
# 5:           4                                NA       We found generalised badness here
# the first sentence is missing, since none of its words is included in
# the lexicon; you might use stemming, etc. to increase coverage (see the
# sketch below)
any(grepl("evidence", lexicon::hash_sentiment_jockers[,x]))
#[1] FALSE
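As a hedged sketch of that stemming idea (assuming the SnowballC package, which is not used in the original answer): stem both the lexicon terms and the sentence tokens before matching, so that related forms such as 'evidence' and 'evident' should reduce to the same stem.
library(SnowballC)
library(stringi)

# stem every lexicon term once
lex_stems <- unique(wordStem(lexicon::hash_sentiment_jockers$x, language = "english"))

# a sentence matches if any of its stemmed tokens appears among the lexicon stems
matched <- sapply(samples, function(s) {
  tokens <- stri_extract_all_words(tolower(s))[[1]]
  any(wordStem(tokens, language = "english") %in% lex_stems)
})
samples[matched]
With this, the first sentence should be caught via 'evidence', at the cost of looser matching overall.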