Comparing the performance of sorting vs. a custom hash function for strings in Python

Asked: 2012-11-23 01:54:54

Tags: python hash-function

I am comparing the performance of sorting versus a custom hash function for strings of various lengths, and the results are a bit surprising. I expected the functions prime_hash (and especially prime_hash2) in the code below to outperform sort_hash, but the opposite is true. Can anyone explain the performance difference? And can anyone suggest an alternative hash? [The function should produce the same value for strings containing the same distribution of letters (e.g. 'listen' and 'silent' must map to the same value) and different values for all other strings.]

Here are the results I got:

For strings of max length: 10
sort_hash: Time in seconds: 3.62555098534
prime_hash: Time in seconds: 5.5846118927
prime_hash2: Time in seconds: 4.14513611794
For strings of max length: 20
sort_hash: Time in seconds: 4.51260590553
prime_hash: Time in seconds: 8.87842392921
prime_hash2: Time in seconds: 5.74179887772
For strings of max length: 30
sort_hash: Time in seconds: 5.41446709633
prime_hash: Time in seconds: 11.4799649715
prime_hash2: Time in seconds: 7.58586287498
For strings of max length: 40
sort_hash: Time in seconds: 6.42467713356
prime_hash: Time in seconds: 14.397785902
prime_hash2: Time in seconds: 9.58741497993
For strings of max length: 50
sort_hash: Time in seconds: 7.25647807121
prime_hash: Time in seconds: 17.0482890606
prime_hash2: Time in seconds: 11.3653149605

Here is the code:

#!/usr/bin/env python

from time import time
import random
import string
from itertools import groupby

def prime(i, primes):
   # Return i if it is divisible by none of the known primes (and record it),
   # otherwise return False.
   for p in primes:
      if not (i == p or i % p):
         return False
   primes.add(i)
   return i

def historic(n):
   # Return the set of the first n primes, by trial division.
   primes = set([2])
   i, p = 2, 0
   while True:
      if prime(i, primes):
         p += 1
         if p == n:
            return primes
      i += 1

# The first 26 primes, one per lowercase letter.
primes = list(historic(26))

def generate_strings(num, max_len):
   # num random lowercase strings with lengths from 3 up to max_len - 1.
   gen_string = lambda i: ''.join(random.choice(string.lowercase) for x in xrange(i))
   return [gen_string(random.randrange(3, max_len)) for i in xrange(num)]

def sort_hash(s):
   # Canonical form of s: sorted characters, so all anagrams map to the same string.
   return ''.join(sorted(s))

def prime_hash(s):
   # Product of one prime per letter; by unique factorization, anagrams get
   # the same product and non-anagrams get different ones.
   return reduce(lambda x, y: x * y, [primes[ord(c) - ord('a')] for c in s])

def prime_hash2(s):
   # Same product as prime_hash, computed with an explicit loop instead of reduce.
   res = 1
   for c in s:
      res = res * primes[ord(c) - ord('a')]
   return res

def dotime(func, inputs):
   start = time()
   # Hash every string, sort the hashes, and set up a groupby over them.
   groupby(sorted([func(s) for s in inputs]))
   print '%s: Time in seconds: %s' % (func.__name__, str(time() - start))

def dotimes(inputs):
   print 'For strings of max length: %s' % max([len(s) for s in inputs])
   dotime(sort_hash, inputs)
   dotime(prime_hash, inputs)
   dotime(prime_hash2, inputs)

if __name__ == '__main__':
   dotimes(generate_strings(1000000, 11))
   dotimes(generate_strings(1000000, 21))
   dotimes(generate_strings(1000000, 31))
   dotimes(generate_strings(1000000, 41))
   dotimes(generate_strings(1000000, 51))

2 Answers:

Answer 0 (score: 2):

I assume you are asking why sort_hash (O(n log n)) is faster than the other functions, which are O(n).

The reason is that your n is too small for the log(n) factor to be significant.

Python's sort() is coded in C. If you coded the arithmetic functions in C as well, you would see n*log(n) lose out to n at much smaller values of n.

Apart from that: timsort does better than n*log(n) when there are many duplicated items. Since there are only 256 characters, you may well find that timsort approaches O(n) long before the strings are long enough for the arithmetic versions to gain the advantage.
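As a rough illustration of that point (a sketch of my own, not from the answer; the name count_hash is made up), an anagram-invariant key can also be built from letter counts, which keeps the per-character scanning inside C-coded builtins such as str.count:

import string

def count_hash(s):
   # Tuple of per-letter counts: anagrams give identical tuples, other
   # strings differ. str.count runs in C, so the only Python-level loop
   # here is over the 26 letters, not over the characters of s.
   return tuple(s.count(c) for c in string.lowercase)

Whether this actually beats sort_hash would have to be measured with the same dotime harness; the point is only that the heavy loop moves out of interpreted Python.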

Answer 1 (score: 0):

Based on BoppreH's input, I was able to get a version of the arithmetic-based hash that even outperforms the C-implemented "sort-based" version. The change is to precompute a letter-to-prime dictionary, which removes the ord() arithmetic and list indexing from the inner loop:

primes = list(historic(26))
# Lookup table mapping each lowercase letter directly to its prime.
primes = {c: primes[ord(c) - ord('a')] for c in string.lowercase}

def prime_hash2(s):
   res = 1
   for c in s:
      res = res * primes[c]
   return res
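
A quick way to check the claim on your own machine (a minimal sketch, assuming the question's generate_strings, sort_hash and dotime definitions are in scope alongside the redefined primes and prime_hash2):

if __name__ == '__main__':
   inputs = generate_strings(1000000, 31)
   dotime(sort_hash, inputs)
   dotime(prime_hash2, inputs)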