How to find which two sentences have the most words in common?

Date: 2013-11-07 15:39:38

Tags: python nltk

Say I have a paragraph. I split it into sentences with sent_tokenize:

variable = ['By the 1870s the scientific community and much of the general public had accepted evolution as a fact.',
    'However, many favoured competing explanations and it was not until the emergence of the modern evolutionary synthesis from the 1930s to the 1950s that a broad consensus developed in which natural selection was the basic mechanism of evolution.',
    'Darwin published his theory of evolution with compelling evidence in his 1859 book On the Origin of Species, overcoming scientific rejection of earlier concepts of transmutation of species.']
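A minimal sketch of how a list like this can be produced with NLTK's sent_tokenize (the paragraph string below is just an assumed placeholder, and sent_tokenize needs the NLTK 'punkt' tokenizer data installed):

from nltk.tokenize import sent_tokenize  # requires nltk and its 'punkt' tokenizer data

# Placeholder paragraph; any block of prose would do
paragraph = ("By the 1870s the scientific community and much of the general public "
             "had accepted evolution as a fact. However, many favoured competing explanations.")

# sent_tokenize splits the paragraph into a list of sentence strings
variable = sent_tokenize(paragraph)
print(variable)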

Now I split each sentence into words and append them to some variable. How do I find the two sentences that have the highest number of words in common? I don't know how to go about this. If I have 10 sentences, that would be 90 checks (each sentence against every other one). Thanks.

2 Answers:

Answer 0 (score: 5)

You can use the intersection of Python sets.

If you have three sentences:

a = "a b c d"
b = "a c x y"
c = "a q v"

you can check how many words two of them have in common like this:

sameWords = set.intersection(set(a.split(" ")), set(c.split(" ")))
numberOfWords = len(sameWords)

With this you can loop over your list of sentences and find the two that share the most words. That gives us:

sentences = ["a b c d", "a d e f", "c x y", "a b c d x"]

def similar(s1, s2):
    # Number of distinct words the two sentences share
    sameWords = set.intersection(set(s1.split(" ")), set(s2.split(" ")))
    return len(sameWords)

currentSimilar = 0
s1 = ""
s2 = ""

# Compare every unordered pair of sentences exactly once
for i, sentence in enumerate(sentences):
    for sentence2 in sentences[i + 1:]:
        similarity = similar(sentence, sentence2)
        if similarity > currentSimilar:
            s1 = sentence
            s2 = sentence2
            currentSimilar = similarity

print(s1, s2)

If performance is a concern, there may be a dynamic programming solution to this problem.
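Short of that, one simple optimisation (just a rough sketch, not dynamic programming) is to split each sentence into a word set once up front, so no sentence gets re-split on every comparison:

import itertools

sentences = ["a b c d", "a d e f", "c x y", "a b c d x"]

# Precompute each sentence's word set once instead of splitting in every comparison
wordSets = [set(s.split(" ")) for s in sentences]

# Check every unordered pair of indices and keep the one with the largest overlap
bestPair = max(itertools.combinations(range(len(sentences)), 2),
               key=lambda pair: len(wordSets[pair[0]] & wordSets[pair[1]]))

print(sentences[bestPair[0]], sentences[bestPair[1]])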

Answer 1 (score: 1)

import itertools

sentences = ["There is no subtle meaning in this.", "Don't analyze this!", "What is this sentence?"]
# Pair each sentence's index with its set of words (leading/trailing punctuation stripped from the sentence)
decomposedsentences = ((index, set(sentence.strip(".?!,").split(" "))) for index, sentence in enumerate(sentences))
# Take the pair of word sets with the largest intersection; s1 and s2 are (index, words) tuples
s1, s2 = max(itertools.combinations(decomposedsentences, 2), key=lambda pair: len(pair[0][1] & pair[1][1]))
print("The two sentences with the most common words", sentences[s1[0]], sentences[s2[0]])