Efficiently building a graph of words with a given Hamming distance

Asked: 2015-06-28 13:57:19

Tags: python algorithm graph-algorithm hamming-distance

I want to build a graph from a list of words with a Hamming distance of (say) 1; in other words, two words are connected if they differ by exactly one letter (lol -> lot).

So given

words = [ 'lol', 'lot', 'bot' ]

the graph would be

{
  'lol' : [ 'lot' ],
  'lot' : [ 'lol', 'bot' ],
  'bot' : [ 'lot' ]
}

The naive approach is to compare every word in the list against every other word and count the differing characters; unfortunately, that is an O(N^2) algorithm.

Which algorithm / data structure / strategy can I use to get better performance?

Also, let's assume only Latin characters, and that all words have the same length.

4 answers:

Answer 0 (score: 21)

Assuming you store your dictionary in a set(), so that lookup is O(1) on average (O(n) in the worst case),

you can generate all the valid words at Hamming distance 1 from a word:

>>> import string
>>> def neighbours(word):
...     for j in range(len(word)):
...         for d in string.ascii_lowercase:
...             word1 = ''.join(d if i == j else c for i, c in enumerate(word))
...             if word1 != word and word1 in words: yield word1
...
>>> {word: list(neighbours(word)) for word in words}
{'bot': ['lot'], 'lol': ['lot'], 'lot': ['bot', 'lol']}
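For reference, here is a self-contained, runnable consolidation of the snippet above (the set `words` holds the example list from the question; neighbour lists are sorted only to make the output deterministic):

```python
import string

words = {'lol', 'lot', 'bot'}

def neighbours(word):
    # try replacing each position with every lowercase letter
    # and keep only candidates that exist in the dictionary
    for j in range(len(word)):
        for d in string.ascii_lowercase:
            word1 = ''.join(d if i == j else c for i, c in enumerate(word))
            if word1 != word and word1 in words:
                yield word1

graph = {word: sorted(neighbours(word)) for word in words}
print(graph)
```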

If M is the length of a word and L the size of the alphabet (i.e. 26), the worst-case time complexity of finding the neighbouring words with this approach is O(L*M*N).

The time complexity of the "naive approach" is O(N^2).

When is this approach better? When L*M < N, i.e. considering only lowercase letters, when M < N/26. (Here I considered only the worst case.)

Note: the average length of an English word is 5.1 letters. Therefore, if your dictionary has more than 132 words, you should consider this approach.
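As a quick sanity check of that figure (assuming L = 26 and the quoted average length M = 5.1):

```python
L = 26    # lowercase Latin alphabet
M = 5.1   # average English word length quoted above
crossover = L * M
print(crossover)  # ~132: above this dictionary size, O(L*M*N) beats O(N^2)
```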

It is probably possible to achieve better performance than this. However, this is very simple to implement.

Experimental benchmark:

The "naive approach" algorithm (A1):

from itertools import zip_longest
def hammingdist(w1,w2): return sum(1 if c1!=c2 else 0 for c1,c2 in zip_longest(w1,w2))
def graph1(words): return {word: [n for n in words if hammingdist(word,n) == 1] for word in words}

This algorithm (A2):

def graph2(words): return {word: list(neighbours(word)) for word in words}

Benchmark code:

import random, string
from timeit import Timer

for dict_size in range(100, 6000, 100):
    words = set(''.join(random.choice(string.ascii_lowercase) for x in range(3)) for _ in range(dict_size))
    t1 = Timer(lambda: graph1(words)).timeit(10)
    t2 = Timer(lambda: graph2(words)).timeit(10)
    print('%d,%f,%f' % (dict_size, t1, t2))

Output:

100,0.119276,0.136940
200,0.459325,0.233766
300,0.958735,0.325848
400,1.706914,0.446965
500,2.744136,0.545569
600,3.748029,0.682245
700,5.443656,0.773449
800,6.773326,0.874296
900,8.535195,0.996929
1000,10.445875,1.126241
1100,12.510936,1.179570
...

[data plot]

I ran another benchmark with smaller steps of N to get a closer look at it:

10,0.002243,0.026343
20,0.010982,0.070572
30,0.023949,0.073169
40,0.035697,0.090908
50,0.057658,0.114725
60,0.079863,0.135462
70,0.107428,0.159410
80,0.142211,0.176512
90,0.182526,0.210243
100,0.217721,0.218544
110,0.268710,0.256711
120,0.334201,0.268040
130,0.383052,0.291999
140,0.427078,0.312975
150,0.501833,0.338531
160,0.637434,0.355136
170,0.635296,0.369626
180,0.698631,0.400146
190,0.904568,0.444710
200,1.024610,0.486549
210,1.008412,0.459280
220,1.056356,0.501408
...

[data plot 2]

You can see that the trade-off point is very low (100, for a dictionary of words of length 3). For small dictionaries the O(N^2) algorithm performs better, but as N grows it is easily beaten by the O(LMN) algorithm.

For dictionaries with longer words, the O(LMN) algorithm remains linear in N; it just has a different slope, so the trade-off moves slightly to the right (130 for length = 5).

Answer 1 (score: 6)

There is no need to take a dependency on the alphabet size. Given a word bot, for example, insert it into a dictionary of word lists under the keys ?ot, b?t, bo?. Then, for each word list, connect all pairs.

import collections

d = collections.defaultdict(list)
with open('/usr/share/dict/words') as f:
    for line in f:
        for word in line.split():
            if len(word) == 6:
                for i in range(len(word)):
                    d[word[:i] + ' ' + word[i + 1:]].append(word)
pairs = [(word1, word2) for s in d.values() for word1 in s for word2 in s if word1 < word2]
print(len(pairs))
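A sketch of the same wildcard-bucket idea, but returning the adjacency dict from the question instead of counting pairs (the function name `build_graph` and the `'?'` placeholder are my choices, not from the answer):

```python
import collections

def build_graph(words):
    # bucket words by each wildcard pattern, e.g. 'lot' -> '?ot', 'l?t', 'lo?'
    buckets = collections.defaultdict(list)
    for word in words:
        for i in range(len(word)):
            buckets[word[:i] + '?' + word[i + 1:]].append(word)
    # words sharing a bucket differ in exactly that one position
    graph = {word: set() for word in words}
    for bucket in buckets.values():
        for w1 in bucket:
            for w2 in bucket:
                if w1 != w2:
                    graph[w1].add(w2)
    return {word: sorted(ns) for word, ns in graph.items()}

print(build_graph(['lol', 'lot', 'bot']))
# {'lol': ['lot'], 'lot': ['bot', 'lol'], 'bot': ['lot']}
```

This runs in time proportional to the total bucket sizes plus the number of output edges, independent of the alphabet.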


Answer 2 (score: 5)

A Ternary Search Trie supports near-neighbour search.

If your dictionary is stored as a TST then, I believe, the average complexity of the lookups while building the graph would be close to O(N*log(N)) on real-world word dictionaries.

See also the Efficient auto-complete with a ternary search tree article.
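A minimal sketch of such a near-neighbour lookup on a TST (my own toy implementation, not the article's code): insertion is standard, and the search carries a mismatch "budget" that is decremented whenever a node's character differs from the query's.

```python
class TSTNode:
    def __init__(self, ch):
        self.ch = ch
        self.lo = self.eq = self.hi = None
        self.is_word = False

class TST:
    def __init__(self):
        self.root = None

    def insert(self, word):
        self.root = self._insert(self.root, word, 0)

    def _insert(self, node, word, i):
        if node is None:
            node = TSTNode(word[i])
        if word[i] < node.ch:
            node.lo = self._insert(node.lo, word, i)
        elif word[i] > node.ch:
            node.hi = self._insert(node.hi, word, i)
        elif i < len(word) - 1:
            node.eq = self._insert(node.eq, word, i + 1)
        else:
            node.is_word = True
        return node

    def near(self, word, budget):
        # all stored words of the same length within Hamming distance `budget`
        out = []
        self._near(self.root, word, 0, budget, [], out)
        return out

    def _near(self, node, word, i, budget, prefix, out):
        if node is None or budget < 0:
            return
        # lo/hi siblings still compare against position i
        self._near(node.lo, word, i, budget, prefix, out)
        self._near(node.hi, word, i, budget, prefix, out)
        rest = budget - (0 if word[i] == node.ch else 1)
        if rest >= 0:
            if i == len(word) - 1:
                if node.is_word:
                    out.append(''.join(prefix) + node.ch)
            else:
                prefix.append(node.ch)
                self._near(node.eq, word, i + 1, rest, prefix, out)
                prefix.pop()

tst = TST()
for w in ['lol', 'lot', 'bot']:
    tst.insert(w)
print(sorted(tst.near('lol', 1)))  # ['lol', 'lot'] (includes the word itself)
```

The budget prunes whole subtrees as soon as too many mismatches accumulate, which is where the speedup over pairwise comparison comes from.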

Answer 3 (score: 1)

This is a linear, O(N), algorithm, but with a large constant factor (R * L * 2). R is the radix (26 for the Latin alphabet). L is an average word length. The factor 2 accounts for the add/replace wildcard operations: abc -> aac and abc -> abca are the two kinds of operations that lead to a Hamming distance of 1.

It is written in Ruby. For 240k words it needs ~250MB of RAM and 136 seconds on average hardware.

Graph implementation blueprint:

class Node
  attr_reader :val, :edges

  def initialize(val)
    @val = val
    @edges = {}
  end

  def <<(node)
    @edges[node.val] ||= true
  end

  def connected?(node)
    @edges[node.val]
  end

  def inspect
    "Val: #{@val}, edges: #{@edges.keys * ', '}"
  end
end

class Graph
  attr_reader :vertices
  def initialize
    @vertices = {}
  end

  def <<(val)
    @vertices[val] = Node.new(val)
  end

  def connect(node1, node2)
    # print "connecting #{size} #{node1.val}, #{node2.val}\r"
    node1 << node2
    node2 << node1
  end

  def each
    @vertices.each do |val, node|
      yield [val, node]
    end
  end

  def get(val)
    @vertices[val]
  end
end

The algorithm itself:

CHARACTERS = ('a'..'z').to_a
graph = Graph.new

# ~ 240 000 words
File.read("/usr/share/dict/words").each_line.each do |word|
  word = word.chomp
  graph << word.downcase
end

graph.each do |val, node|
  CHARACTERS.each do |char|
    i = 0
    while i <= val.size
      # insertion: probe for the word with char inserted at position i (one letter longer)
      node2 = graph.get(val[0, i] + char + val[i..-1])
      graph.connect(node, node2) if node2
      if i < val.size
        # substitution: probe with position i replaced; guard against the
        # self-loop that appears when char equals val[i]
        node2 = graph.get(val[0, i] + char + val[i+1..-1])
        graph.connect(node, node2) if node2 && node2 != node
      end
      i += 1
    end
  end
end
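For comparison with the Python answers above, the same insert/replace probing can be sketched in Python (a rough port of my own, not the author's code; the graph is a plain dict of sets):

```python
import string

def build_edit1_graph(words):
    # connect words at edit distance 1 via one substitution or one insertion
    vocab = set(words)
    graph = {w: set() for w in vocab}
    for val in vocab:
        for char in string.ascii_lowercase:
            for i in range(len(val) + 1):
                # insertion: probe the word one character longer
                cand = val[:i] + char + val[i:]
                if cand in vocab:
                    graph[val].add(cand)
                    graph[cand].add(val)
                if i < len(val):
                    # substitution: probe with position i replaced
                    cand = val[:i] + char + val[i + 1:]
                    if cand in vocab and cand != val:
                        graph[val].add(cand)
                        graph[cand].add(val)
    return graph

g = build_edit1_graph(['lol', 'lot', 'bot', 'boat'])
print({w: sorted(ns) for w, ns in g.items()})
```

Like the Ruby version, each word is probed R * L * 2 times against the dictionary, giving O(N) total with the same large constant.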