I have a couple of tweets that need processing. I am trying to find occurrences of messages that imply harm to a person. How do I achieve this via NLP?
I bought my son a toy gun
I shot my neighbor with a gun
I don't like this gun
I would love to own this gun
This gun is a very good buy
Feel like shooting myself with a gun
Of the sentences above, the 2nd and the 6th are the ones I want to find.
Answer 0: (score: 1)
If the problem is restricted to guns and shooting, then you could use a dependency parser (such as the Stanford Parser) to find verbs and their (prepositional) objects, starting at the verb and following its dependents down the parse tree. For example, in both 2 and 6 these would be "shoot, with gun".
Then you could use (near-)synonym lists for "shoot" ("kill", "murder", "wound", etc.) and "gun" ("weapon", "rifle", etc.) to check whether they occur in this pattern (verb - preposition - noun) in each sentence.
There will be other ways to express the same idea, e.g. "I bought a gun to shoot my neighbor", where the dependency relation is different, and you would need to detect those kinds of dependencies as well.
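As a rough illustration of the synonym-list idea (without a dependency parser), a sentence can be flagged when a token from a "shoot" synonym set and a token from a "gun" synonym set co-occur. The word lists and the crude suffix-stripping lemmatizer below are illustrative assumptions, not part of the answer's actual method, and co-occurrence alone will miss the verb-preposition-noun structure the answer describes:

```python
# Minimal keyword co-occurrence sketch (no parser): flag a sentence when a
# harm verb and a weapon noun both appear. The word lists and the crude
# lemmatizer are illustrative assumptions, not a real NLP pipeline.
SHOOT_SYNONYMS = {"shoot", "shot", "kill", "murder", "wound"}
GUN_SYNONYMS = {"gun", "weapon", "rifle", "pistol"}

def crude_lemma(token):
    """Very rough normalization: lowercase, strip punctuation and common endings."""
    token = token.lower().strip(".,!?")
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) - len(suffix) >= 3:
            return token[: -len(suffix)]
    return token

def looks_harmful(sentence):
    lemmas = {crude_lemma(t) for t in sentence.split()}
    return bool(lemmas & SHOOT_SYNONYMS) and bool(lemmas & GUN_SYNONYMS)

sentences = [
    "I bought my son a toy gun",
    "I shot my neighbor with a gun",
    "Feel like shooting myself with a gun",
]
flagged = [s for s in sentences if looks_harmful(s)]
```

Note that this would not distinguish "I shot my neighbor with a gun" from "My neighbor shot a photo of my gun", which is exactly why the answer recommends a dependency parser.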
Answer 1: (score: 1)
All of vpekar's suggestions are good. Here is some Python code that will at least parse the sentences and check whether they contain a verb from a user-defined set of harm words. Note: most "harm words" will probably have multiple senses, many of which have nothing to do with harm. This approach does not attempt to disambiguate word sense.
(此代码假设您拥有NLTK和Stanford CoreNLP)
import os
import subprocess
from xml.dom import minidom
from nltk.corpus import wordnet as wn

def StanfordCoreNLP_Plain(inFile):
    # Create the startup info so the Java program runs in the background (for Windows computers)
    startupinfo = None
    if os.name == 'nt':
        startupinfo = subprocess.STARTUPINFO()
        startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
    # Execute the Stanford parser from the command line
    # (';' is the Windows classpath separator; use ':' on Linux/macOS, or os.pathsep)
    cmd = ['java', '-Xmx1g', '-cp', 'stanford-corenlp-1.3.5.jar;stanford-corenlp-1.3.5-models.jar;xom.jar;joda-time.jar', 'edu.stanford.nlp.pipeline.StanfordCoreNLP', '-annotators', 'tokenize,ssplit,pos', '-file', inFile]
    subprocess.Popen(cmd, stdout=subprocess.PIPE, startupinfo=startupinfo).communicate()
    # CoreNLP writes its result to <input file name>.xml in the working directory
    outFile = open(os.path.basename(inFile) + '.xml')
    xmldoc = minidom.parse(outFile)
    itemlist = xmldoc.getElementsByTagName('sentence')
    Document = []
    # Get the data out of the XML document and into Python lists
    for item in itemlist:
        sentList = []
        tokens = item.getElementsByTagName('token')
        for d in tokens:
            word = d.getElementsByTagName('word')[0].firstChild.data
            pos = d.getElementsByTagName('POS')[0].firstChild.data
            sentList.append([str(pos.strip()), str(word.strip())])
        Document.append(sentList)
    return Document
def FindHarmSentence(Document):
    # Loop through sentences in the document, looking for verbs in the harm-word set
    VerbTags = ['VBN', 'VB', 'VBZ', 'VBD', 'VBG', 'VBP', 'V']
    HarmWords = ("shoot", "kill")
    ReturnSentences = []
    for Sentence in Document:
        for word in Sentence:
            if word[0] in VerbTags:
                # wn.morphy returns the base form of the verb, or None if not found
                wordRoot = wn.morphy(word[1], wn.VERB)
                if wordRoot in HarmWords:
                    print("This message could indicate harm:", str(Sentence))
                    ReturnSentences.append(Sentence)
    return ReturnSentences
#Assuming your input is a string, we need to put the strings in some file.
Sentences = "I bought my son a toy gun. I shot my neighbor with a gun. I don't like this gun. I would love to own this gun. This gun is a very good buy. Feel like shooting myself with a gun."
ProcessFile = "ProcFile.txt"
OpenProcessFile = open(ProcessFile, 'w')
OpenProcessFile.write(Sentences)
OpenProcessFile.close()
#Sentence split, tokenize, and part of speech tag the data using Stanford Core NLP
Document = StanfordCoreNLP_Plain(ProcessFile)
#Find sentences in the document with harm words
HarmSentences = FindHarmSentence(Document)
This outputs the following:
This message could indicate harm: [['PRP', 'I'], ['VBD', 'shot'], ['PRP$', 'my'], ['NN', 'neighbor'], ['IN', 'with'], ['DT', 'a'], ['NN', 'gun'], ['.', '.']]
This message could indicate harm: [['NNP', 'Feel'], ['IN', 'like'], ['VBG', 'shooting'], ['PRP', 'myself'], ['IN', 'with'], ['DT', 'a'], ['NN', 'gun'], ['.', '.']]
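For experimenting with the harm-word filter itself, without running Java/CoreNLP, the same verb check can be exercised on pre-tagged tokens directly. The hand-tagged sentences and the tiny lemma table below stand in for the parser output and for wn.morphy, and are assumptions for illustration only:

```python
# Stand-alone version of the harm-word filter: it operates on pre-tagged
# sentences (lists of [POS, word] pairs), so no Java/CoreNLP is needed.
# The tiny lemma table replaces wn.morphy and is an illustrative assumption.
VERB_TAGS = {'VBN', 'VB', 'VBZ', 'VBD', 'VBG', 'VBP', 'V'}
HARM_WORDS = {"shoot", "kill"}
LEMMAS = {"shot": "shoot", "shooting": "shoot", "killed": "kill"}

def find_harm_sentences(document):
    hits = []
    for sentence in document:
        for pos, word in sentence:
            lemma = LEMMAS.get(word.lower(), word.lower())
            if pos in VERB_TAGS and lemma in HARM_WORDS:
                hits.append(sentence)
                break  # one harm verb is enough to flag the sentence
    return hits

# Hand-tagged stand-ins for the CoreNLP output above
document = [
    [['PRP', 'I'], ['VBD', 'bought'], ['NN', 'gun']],
    [['PRP', 'I'], ['VBD', 'shot'], ['NN', 'neighbor']],
]
harmful = find_harm_sentences(document)
```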
Answer 2: (score: 0)