Is there any difference between the following two pieces of code:
distances = ((jaccard_distance(set(nltk.ngrams(entry, gram_number)),
                               set(nltk.ngrams(word, gram_number))), word)
             for word in spellings)
and
for word in spellings:
    distances = ((jaccard_distance(set(nltk.ngrams(entry, gram_number)),
                                   set(nltk.ngrams(word, gram_number))), word))
What exactly is the difference? Thanks in advance for your help.
Answer 0 (score: 2)
Getting the Jaccard distance between two bags of words, i.e. the unique vocabularies of two sentences:
>>> from nltk.metrics import jaccard_distance
>>> from nltk import ngrams
>>> sent1 = "This is a foo bar sentence".split()
>>> sent2 = "A bar bar black sheep have you a sentence".split()
>>> set(sent1) # The set of unique words in sent1
set(['a', 'bar', 'sentence', 'This', 'is', 'foo'])
>>> set(sent2) # The set of unique words in sent2
set(['A', 'sheep', 'bar', 'sentence', 'black', 'a', 'have', 'you'])
>>> jaccard_distance(set(sent1), set(sent2))
0.7272727272727273
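As a quick sanity check (an aside of mine, not part of the original answer), the same number can be reproduced directly from the set operations, since the Jaccard distance is 1 minus the size of the intersection divided by the size of the union; s1, s2, inter and union are just throwaway names for this sketch:
>>> s1, s2 = set(sent1), set(sent2)
>>> inter, union = s1 & s2, s1 | s2
>>> sorted(inter)  # the words the two sentences have in common
['a', 'bar', 'sentence']
>>> 1 - len(inter) / float(len(union))  # 1 - 3/11
0.7272727272727273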
Now, if it is a bag of ngrams instead:
>>> list(ngrams(sent2, 3)) # list of tri-grams in sent2.
[('A', 'bar', 'bar'), ('bar', 'bar', 'black'), ('bar', 'black', 'sheep'), ('black', 'sheep', 'have'), ('sheep', 'have', 'you'), ('have', 'you', 'a'), ('you', 'a', 'sentence')]
>>> set(list(ngrams(sent2, 3))) # unique set of tri-grams in sent2.
set([('A', 'bar', 'bar'), ('have', 'you', 'a'), ('you', 'a', 'sentence'), ('sheep', 'have', 'you'), ('black', 'sheep', 'have'), ('bar', 'black', 'sheep'), ('bar', 'bar', 'black')])
>>> set(ngrams(sent2, 3))
set([('A', 'bar', 'bar'), ('have', 'you', 'a'), ('you', 'a', 'sentence'), ('sheep', 'have', 'you'), ('black', 'sheep', 'have'), ('bar', 'black', 'sheep'), ('bar', 'bar', 'black')])
>>> set(ngrams(sent1, 3))
set([('This', 'is', 'a'), ('a', 'foo', 'bar'), ('is', 'a', 'foo'), ('foo', 'bar', 'sentence')])
>>> jaccard_distance(set(ngrams(sent1,3)), set(ngrams(sent2, 3)))
1.0
What does a Jaccard distance of 1.0 mean?
It means that the two sequences being compared are completely different; in this case, the sequences are the sets of unique ngrams from each sentence.
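As an illustrative aside (the toy sets here are mine), the two extremes of the metric look like this: identical sets give 0.0 and completely disjoint sets give 1.0:
>>> from nltk.metrics import jaccard_distance
>>> jaccard_distance(set(['a', 'b']), set(['a', 'b']))  # identical sets -> no distance
0.0
>>> jaccard_distance(set(['a', 'b']), set(['c', 'd']))  # no overlap -> maximal distance
1.0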
Previously, we split a sentence string into a list of strings, so when we compared two sequences, we were comparing the words/ngrams of the sentences.
Now, if we iterate over two words instead of sentences, we split each word into a list of characters, i.e.
>>> word1 = 'Supercalifragilisticexpialidocious'
>>> word2 = 'Honorificabilitudinitatibus'
>>> list(word1) # The list of characters in the word
['S', 'u', 'p', 'e', 'r', 'c', 'a', 'l', 'i', 'f', 'r', 'a', 'g', 'i', 'l', 'i', 's', 't', 'i', 'c', 'e', 'x', 'p', 'i', 'a', 'l', 'i', 'd', 'o', 'c', 'i', 'o', 'u', 's']
>>> set(list(word1)) # The set of unique characters in the word
set(['a', 'c', 'e', 'd', 'g', 'f', 'i', 's', 'l', 'o', 'p', 'S', 'r', 'u', 't', 'x'])
>>> set(ngrams(word1, 3)) # The set of unique character trigrams in the word.
set([('c', 'a', 'l'), ('S', 'u', 'p'), ('t', 'i', 'c'), ('d', 'o', 'c'), ('f', 'r', 'a'), ('i', 'f', 'r'), ('r', 'a', 'g'), ('i', 's', 't'), ('s', 't', 'i'), ('x', 'p', 'i'), ('u', 'p', 'e'), ('o', 'u', 's'), ('i', 'c', 'e'), ('l', 'i', 'f'), ('p', 'e', 'r'), ('o', 'c', 'i'), ('g', 'i', 'l'), ('l', 'i', 'd'), ('i', 'l', 'i'), ('c', 'i', 'o'), ('r', 'c', 'a'), ('l', 'i', 's'), ('a', 'g', 'i'), ('p', 'i', 'a'), ('i', 'o', 'u'), ('e', 'x', 'p'), ('i', 'a', 'l'), ('c', 'e', 'x'), ('a', 'l', 'i'), ('i', 'd', 'o'), ('e', 'r', 'c')])
To get the Jaccard distance between them:
>>> jaccard_distance(set(ngrams(word1, 3)), set(ngrams(word2, 3)))
0.9818181818181818
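The distance is that close to 1.0 because the two trigram sets barely overlap; as a quick check of my own, they share exactly one trigram out of a union of 55:
>>> set(ngrams(word1, 3)) & set(ngrams(word2, 3))  # the only character trigram in common
set([('i', 'l', 'i')])
>>> len(set(ngrams(word1, 3)) | set(ngrams(word2, 3)))  # size of the union
55
>>> 54 / 55.0  # (55 - 1) / 55, which is exactly the Jaccard distance above
0.9818181818181818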
Now, coming to the OP's question:
distances = ((jaccard_distance(set(nltk.ngrams(entry, gram_number)),
                               set(nltk.ngrams(word, gram_number))), word)
             for word in spellings)
vs
for word in spellings:
    distances = ((jaccard_distance(set(nltk.ngrams(entry, gram_number)),
                                   set(nltk.ngrams(word, gram_number))), word))
The first thing you can try is to simplify the code. Instead of typing nltk.ngrams(...) again and again, you can do this:
>>> from nltk import ngrams
>>> list(ngrams('foobar', 3))
[('f', 'o', 'o'), ('o', 'o', 'b'), ('o', 'b', 'a'), ('b', 'a', 'r')]
If you specifically use n-grams of order 2 or 3, i.e. bigrams or trigrams, you can do this:
>>> from nltk import bigrams, trigrams
>>> list(bigrams('foobar'))
[('f', 'o'), ('o', 'o'), ('o', 'b'), ('b', 'a'), ('a', 'r')]
>>> list(trigrams('foobar'))
[('f', 'o', 'o'), ('o', 'o', 'b'), ('o', 'b', 'a'), ('b', 'a', 'r')]
If you want to get fancy and make a custom function for whatever ngram order you want, you can try functools.partial:
>>> from functools import partial
>>> from nltk import ngrams
>>> octagram = partial(ngrams, n=8)
>>> word = 'Supercalifragilisticexpialidocious'
>>> octagram(word)
<generator object ngrams at 0x10cafff00>
>>> list(octagram(word))
[('S', 'u', 'p', 'e', 'r', 'c', 'a', 'l'), ('u', 'p', 'e', 'r', 'c', 'a', 'l', 'i'), ('p', 'e', 'r', 'c', 'a', 'l', 'i', 'f'), ('e', 'r', 'c', 'a', 'l', 'i', 'f', 'r'), ('r', 'c', 'a', 'l', 'i', 'f', 'r', 'a'), ('c', 'a', 'l', 'i', 'f', 'r', 'a', 'g'), ('a', 'l', 'i', 'f', 'r', 'a', 'g', 'i'), ('l', 'i', 'f', 'r', 'a', 'g', 'i', 'l'), ('i', 'f', 'r', 'a', 'g', 'i', 'l', 'i'), ('f', 'r', 'a', 'g', 'i', 'l', 'i', 's'), ('r', 'a', 'g', 'i', 'l', 'i', 's', 't'), ('a', 'g', 'i', 'l', 'i', 's', 't', 'i'), ('g', 'i', 'l', 'i', 's', 't', 'i', 'c'), ('i', 'l', 'i', 's', 't', 'i', 'c', 'e'), ('l', 'i', 's', 't', 'i', 'c', 'e', 'x'), ('i', 's', 't', 'i', 'c', 'e', 'x', 'p'), ('s', 't', 'i', 'c', 'e', 'x', 'p', 'i'), ('t', 'i', 'c', 'e', 'x', 'p', 'i', 'a'), ('i', 'c', 'e', 'x', 'p', 'i', 'a', 'l'), ('c', 'e', 'x', 'p', 'i', 'a', 'l', 'i'), ('e', 'x', 'p', 'i', 'a', 'l', 'i', 'd'), ('x', 'p', 'i', 'a', 'l', 'i', 'd', 'o'), ('p', 'i', 'a', 'l', 'i', 'd', 'o', 'c'), ('i', 'a', 'l', 'i', 'd', 'o', 'c', 'i'), ('a', 'l', 'i', 'd', 'o', 'c', 'i', 'o'), ('l', 'i', 'd', 'o', 'c', 'i', 'o', 'u'), ('i', 'd', 'o', 'c', 'i', 'o', 'u', 's')]
Instead of rewriting set(nltk.ngrams(word, gram_number)) every time, you can just call uco(word):
>>> from nltk import ngrams
>>> def unique_character_octagrams(text, n=8):
... return set(ngrams(text, n))
...
>>> uco = unique_character_octagrams
>>> uco(word1)
set([('e', 'x', 'p', 'i', 'a', 'l', 'i', 'd'), ('S', 'u', 'p', 'e', 'r', 'c', 'a', 'l'), ('i', 'c', 'e', 'x', 'p', 'i', 'a', 'l'), ('a', 'g', 'i', 'l', 'i', 's', 't', 'i'), ('t', 'i', 'c', 'e', 'x', 'p', 'i', 'a'), ('i', 'l', 'i', 's', 't', 'i', 'c', 'e'), ('i', 'd', 'o', 'c', 'i', 'o', 'u', 's'), ('c', 'e', 'x', 'p', 'i', 'a', 'l', 'i'), ('l', 'i', 's', 't', 'i', 'c', 'e', 'x'), ('f', 'r', 'a', 'g', 'i', 'l', 'i', 's'), ('l', 'i', 'f', 'r', 'a', 'g', 'i', 'l'), ('i', 'f', 'r', 'a', 'g', 'i', 'l', 'i'), ('p', 'i', 'a', 'l', 'i', 'd', 'o', 'c'), ('a', 'l', 'i', 'f', 'r', 'a', 'g', 'i'), ('x', 'p', 'i', 'a', 'l', 'i', 'd', 'o'), ('e', 'r', 'c', 'a', 'l', 'i', 'f', 'r'), ('l', 'i', 'd', 'o', 'c', 'i', 'o', 'u'), ('g', 'i', 'l', 'i', 's', 't', 'i', 'c'), ('i', 's', 't', 'i', 'c', 'e', 'x', 'p'), ('r', 'c', 'a', 'l', 'i', 'f', 'r', 'a'), ('r', 'a', 'g', 'i', 'l', 'i', 's', 't'), ('i', 'a', 'l', 'i', 'd', 'o', 'c', 'i'), ('p', 'e', 'r', 'c', 'a', 'l', 'i', 'f'), ('a', 'l', 'i', 'd', 'o', 'c', 'i', 'o'), ('u', 'p', 'e', 'r', 'c', 'a', 'l', 'i'), ('c', 'a', 'l', 'i', 'f', 'r', 'a', 'g'), ('s', 't', 'i', 'c', 'e', 'x', 'p', 'i')])
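With that helper in place, the distance call itself becomes much shorter. A usage sketch of mine, reusing word1 and word2 from above (the two words share no character octagram at all, so the distance is the maximal 1.0):
>>> from nltk.metrics import jaccard_distance
>>> jaccard_distance(uco(word1), uco(word2))  # no 8-character sequence in common
1.0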
In the OP, you use for word in spellings to iterate over the spellings, but it is not clear what spellings is. It would be better if the OP included a sample input for spellings, so that answerers don't have to guess in the dark what spellings really is.
From the loop and the way the Jaccard distance is used, spellings looks like a list of words, so a better variable name would be list_of_words, and the iteration would then be self-explanatory without comments, e.g. for word in list_of_words.
Also, the entry variable is ambiguous; judging from its usage, it is most likely the query you want to run against the list of words, so a possible variable name would be query_word.
from nltk import ngrams
from nltk.metrics import jaccard_distance

def unique_character_trigrams(text, n=3):
    return set(ngrams(text, n))

uct = unique_character_trigrams

list_of_words = ['Supercalifragilisticexpialidocious', 'Honorificabilitudinitatibus']
query_word = 'Antidisestablishmentarianism'

for word in list_of_words:
    d = jaccard_distance(uct(query_word), uct(word))
    print("Comparing {} vs {}\nJaccard = {}\n".format(query_word, word, d))
[OUT]:
Comparing Antidisestablishmentarianism vs Supercalifragilisticexpialidocious
Jaccard = 0.982142857143
Comparing Antidisestablishmentarianism vs Honorificabilitudinitatibus
Jaccard = 1.0
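If the end goal is to recommend the closest spelling (my guess at what the OP is after, not something stated in the question), the (distance, word) pairs can simply be collected and passed to min(), which picks the pair with the smallest distance. Output shown as it would appear under the same Python 2 setup as the snippet above:
candidates = [(jaccard_distance(uct(query_word), uct(word)), word)
              for word in list_of_words]
best_distance, best_word = min(candidates)
print("Closest match: {} (Jaccard = {})".format(best_word, best_distance))
[OUT]:
Closest match: Supercalifragilisticexpialidocious (Jaccard = 0.982142857143)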
Now, really coming back to the OP's question. Let's treat:

spellings as x, i.e. a list of numbers
entry as y, i.e. a single static number
word as num, i.e. one number from the list
jaccard_distance as f, a simple subtraction function

In the first scenario, this syntax of looping over a sequence inline is a generator expression. The output is a generator, which you have to materialize with list, and inside the generator each element is an output of f:
>>> x = [10, 20, 30] # A list of numbers.
>>> y = 3 # A number to compare against the list.
>>> f = lambda x, y: x - y # A simple function to do x - y
>>> f(10,3)
7
>>> f(20,3)
17
>>> result = (f(num,y) for num in x)
>>> result
<generator object <genexpr> at 0x10cafff00>
>>> list(result)
[7, 17, 27]
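As a side note of my own: with square brackets instead of parentheses, the very same expression becomes a list comprehension, which evaluates eagerly and gives you the list straight away, with no list() call needed:
>>> result_list = [f(num, y) for num in x]  # list comprehension: built immediately
>>> result_list
[7, 17, 27]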
In the second scenario, it is the more traditional way of iterating, and on each iteration of the loop you get a single integer output:
>>> for num in x:
... result = f(num, y)
... print(type(result), result)
...
(<type 'int'>, 7)
(<type 'int'>, 17)
(<type 'int'>, 27)
Answer 1 (score: 0)
In case 1:
distances yields a (distance, word) tuple for every word in spellings, such as:
(0.1111111111111111, 'hello')
(0.2222222222222222, 'world')
(0.5, 'program')
(0.2727272727272727, 'computer')
(0.0, 'spell')
In case 2:
distances is overwritten on each iteration, so it only holds the last value:
(0.0, 'spell')
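A minimal sketch (my own suggestion, reusing the OP's names spellings, entry and gram_number) of how to keep every pair in case 2 is to append to a list instead of re-binding distances on every iteration:
distances = []
for word in spellings:
    distances.append((jaccard_distance(set(nltk.ngrams(entry, gram_number)),
                                       set(nltk.ngrams(word, gram_number))), word))
# distances now holds one (distance, word) pair per word, just like case 1 after list()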