Answer 0 (score: 15)
It depends on how strict your definition of "similar" is.
As others have pointed out, you could use something like latent semantic analysis or the related latent Dirichlet allocation.
As pointed out, you may also want to use an existing resource for something like this.
Many research papers (example) use the term semantic similarity. The basic idea is usually to compute the distance between two words in a graph, where a word is a child of another word if it is a type of its parent. For example, "songbird" would be a child of "bird". If you like, semantic similarity can also be used as the distance metric for creating clusters.
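To make the parent/child idea concrete, here is a small sketch using NLTK's WordNet interface (it assumes the WordNet corpus has been downloaded with nltk.download('wordnet'); picking the synsets 'bird.n.01' and 'songbird.n.01' is just an illustrative choice of one sense per word):

from nltk.corpus import wordnet as wn

# One sense of each word, chosen purely for illustration.
bird = wn.synset('bird.n.01')
songbird = wn.synset('songbird.n.01')

# 'bird' shows up among the hypernym (parent-type) ancestors of 'songbird',
# i.e. 'songbird' is a child of 'bird' in the WordNet graph.
print(any(bird in path for path in songbird.hypernym_paths()))  # True

# A path-based similarity score, which decreases with graph distance.
print(songbird.path_similarity(bird))  # 0.25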
Also, if you set a threshold on the value of some semantic similarity measure, you can get a boolean True or False. Here is a gist I created (word_similarity.py) that uses NLTK's corpus reader for WordNet. Hopefully this points you in the right direction and gives you a few more search terms.
def sim(word1, word2, lch_threshold=2.15, verbose=False):
    """Determine if two (already lemmatized) words are similar or not.

    Call with verbose=True to print the WordNet senses from each word
    that are considered similar.

    The documentation for the NLTK WordNet Interface is available here:
    http://nltk.googlecode.com/svn/trunk/doc/howto/wordnet.html
    """
    from nltk.corpus import wordnet as wn
    results = []
    for net1 in wn.synsets(word1):
        for net2 in wn.synsets(word2):
            try:
                lch = net1.lch_similarity(net2)
            except:
                # lch_similarity is only defined for synsets with the same
                # part of speech; skip pairs where it cannot be computed.
                continue
            # The value to compare the LCH to was found empirically.
            # (The value is very application dependent. Experiment!)
            if lch >= lch_threshold:
                results.append((net1, net2))
    if not results:
        return False
    if verbose:
        for net1, net2 in results:
            print net1
            print net1.definition
            print net2
            print net2.definition
            print 'path similarity:'
            print net1.path_similarity(net2)
            print 'lch similarity:'
            print net1.lch_similarity(net2)
            print 'wup similarity:'
            print net1.wup_similarity(net2)
            print '-' * 79
    return True
Example output
>>> sim('college', 'academy')
True
>>> sim('essay', 'schoolwork')
False
>>> sim('essay', 'schoolwork', lch_threshold=1.5)
True
>>> sim('human', 'man')
True
>>> sim('human', 'car')
False
>>> sim('fare', 'food')
True
>>> sim('fare', 'food', verbose=True)
Synset('fare.n.04')
the food and drink that are regularly served or consumed
Synset('food.n.01')
any substance that can be metabolized by an animal to give energy and build tissue
path similarity:
0.5
lch similarity:
2.94443897917
wup similarity:
0.909090909091
-------------------------------------------------------------------------------
True
>>> sim('bird', 'songbird', verbose=True)
Synset('bird.n.01')
warm-blooded egg-laying vertebrates characterized by feathers and forelimbs modified as wings
Synset('songbird.n.01')
any bird having a musical call
path similarity:
0.25
lch similarity:
2.25129179861
wup similarity:
0.869565217391
-------------------------------------------------------------------------------
True
>>> sim('happen', 'cause', verbose=True)
Synset('happen.v.01')
come to pass
Synset('induce.v.02')
cause to do; cause to act in a specified manner
path similarity:
0.333333333333
lch similarity:
2.15948424935
wup similarity:
0.5
-------------------------------------------------------------------------------
Synset('find.v.01')
come upon, as if by accident; meet with
Synset('induce.v.02')
cause to do; cause to act in a specified manner
path similarity:
0.333333333333
lch similarity:
2.15948424935
wup similarity:
0.5
-------------------------------------------------------------------------------
True
Answer 1 (score: 3)
I suppose you could build your own database of such associations using ML and NLP techniques, but you might also consider querying existing resources such as WordNet to get the job done.
Answer 2 (score: 2)
If you have a large collection of documents related to the topic of interest, you might want to look at Latent Dirichlet Allocation. LDA is a fairly standard NLP technique that automatically clusters words into topics, where similarity between words is determined by co-occurrence in the same document (you can treat a single sentence as a document if that better serves your needs).
You will find a number of LDA toolkits available. We would need more details about your exact problem before recommending one, and in any case I'm not expert enough to make that recommendation, but I can at least suggest you take a look at LDA.
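Purely as an illustration of the kind of workflow involved (not an endorsement of any particular toolkit), here is a minimal sketch with the gensim library; the toy documents and the num_topics/passes values are placeholders you would replace with your own data and tuning:

from gensim import corpora, models

# Toy "documents" -- in practice these would be your tokenized texts
# (or individual sentences, as suggested above).
documents = [
    ['bird', 'songbird', 'wing', 'feather'],
    ['college', 'academy', 'essay', 'schoolwork'],
    ['bird', 'feather', 'nest', 'egg'],
]

dictionary = corpora.Dictionary(documents)
corpus = [dictionary.doc2bow(doc) for doc in documents]

# num_topics and passes are placeholders; both are application dependent.
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2, passes=10)

# Words that tend to co-occur end up with high probability in the same topic.
for topic_id in range(2):
    print(lda.show_topic(topic_id))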
Answer 3 (score: 1)
Answer 4 (score: 1)
Word2Vec can be useful for finding similar words (contextually/semantically). In word2vec, we represent words as vectors in an n-dimensional space and can compute the distance between words (Euclidean distance), or simply generate clusters.
After this, we can come up with a numerical value for the similarity between two words.
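As a rough sketch of what that could look like with gensim's Word2Vec implementation (the toy sentences and the hyperparameter values here are placeholders, and the keyword names follow the gensim 4.x API):

from gensim.models import Word2Vec

# Toy corpus -- replace with your own tokenized sentences.
sentences = [
    ['the', 'songbird', 'is', 'a', 'bird'],
    ['a', 'bird', 'has', 'feathers', 'and', 'wings'],
    ['the', 'college', 'and', 'the', 'academy', 'teach', 'students'],
]

# vector_size, window, and min_count are placeholders; tune them for your data.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1)

# Cosine similarity between the two word vectors (a number in [-1, 1]).
print(model.wv.similarity('bird', 'songbird'))

# Or find the words closest to a given word in the vector space.
print(model.wv.most_similar('bird', topn=3))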