Abusing yield to avoid a condition in a loop

Date: 2010-11-15 22:46:54

Tags: python performance generator yield

I need to search for the first, last, any, or all occurrences of one thing inside another. To avoid repeating myself (DRY), I came up with the following solution.

Of interest are the two methods search_revisions() and collect_one_occurence() of the Searcher classes.

In SearcherYield, search_revisions() creates a generator, and collect_one_occurence() abandons that generator after collecting the first result. In SearcherCondition, I put a condition inside the loop. This condition has to be checked on every iteration of the loop.

I cannot decide whether my (ab)use of yield and the subsequent abandoning of the generator is a stroke of genius or a horrible hack. What do you think? Do you have any other ideas for this situation?

#!/usr/bin/python

class Revision:
  # a revision is something like a textfile.
  # the search() method will search the textfile
  # and return the lines which match the given pattern.
  # for demonstration purposes this class is simplified
  # to return predefined results
  def __init__(self, results):
    self.results = results
  def search(self, pattern):
    return self.results

class AbstractSearcher:
  def __init__(self, revisions):
    self.revisions = revisions
  def search_for_first_occurence(self, pattern):
    keys = sorted(self.revisions.iterkeys())
    return self.collect_one_occurence(keys, pattern)
  def search_for_last_occurence(self, pattern):
    keys = sorted(self.revisions.iterkeys(), reverse = True)
    return self.collect_one_occurence(keys, pattern)
  def search_for_any_occurence(self, pattern):
    keys = self.revisions.iterkeys()
    return self.collect_one_occurence(keys, pattern)
  def search_for_all_occurences(self, pattern):
    keys = self.revisions.iterkeys()
    return self.collect_all_occurences(keys, pattern)

class SearcherYield(AbstractSearcher):

  def search_revisions(self, keys, pattern):
    # create generator which yields the results one by one
    for key in keys:
      rev = self.revisions[key]
      result = rev.search(pattern)
      if result:
        yield result

  def collect_one_occurence(self, keys, pattern):
    # take the first result and then abandon the generator
    for result in self.search_revisions(keys, pattern):
      return result
    return []

  def collect_all_occurences(self, keys, pattern):
    # collect all results from generator
    results = []
    for result in self.search_revisions(keys, pattern):
      results.extend(result)
    return results

class SearcherCondition(AbstractSearcher):

  def search_revisions(self, keys, pattern, just_one):
    # collect either all results from all revisions
    # or break the loop after first result found
    results = []
    for key in keys:
      rev = self.revisions[key]
      result = rev.search(pattern)
      if result:
        results.extend(result)
        if just_one:
          break
    return results

  def collect_one_occurence(self, keys, pattern):
    return self.search_revisions(keys, pattern, just_one = True)

  def collect_all_occurences(self, keys, pattern):
    return self.search_revisions(keys, pattern, just_one = False)

def demo(searcher):
  print searcher.__class__.__name__
  print 'first:', searcher.search_for_first_occurence('foo')
  print 'last: ', searcher.search_for_last_occurence('foo')
  print 'any:  ', searcher.search_for_any_occurence('foo')
  print 'all:  ', searcher.search_for_all_occurences('foo')

def main():
  revisions = {
        1: Revision([]),
        2: Revision(['a', 'b']),
        3: Revision(['c']),
        4: Revision(['d','e', 'f']),
        5: Revision([])}
  demo(SearcherYield(revisions))
  demo(SearcherCondition(revisions))

if __name__ == '__main__':
  main()

Some context: a revision is basically a text file. You can think of them as revisions of a wiki page. There are usually hundreds of revisions, sometimes thousands. Each revision contains up to thousands of lines of text. There are also cases with only a few revisions of a few lines each.

A search in a revision scans the text for a pattern and returns the matching lines. Sometimes there are thousands of results, sometimes there are none.

Sometimes I just need to know whether there is any result in any revision (search any). Sometimes I have to collect all results for further processing (search all). Sometimes I only need the first revision with a match, sometimes only the last (search first and search last).
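
For comparison, the "take the first result and then abandon the generator" step in collect_one_occurence() above could also be written with the next() builtin (available since Python 2.6). A minimal sketch of such an alternative method for SearcherYield, not a tested variant:

  def collect_one_occurence(self, keys, pattern):
    # next() takes the first item the generator yields, or returns
    # the default [] if the generator produces nothing at all
    return next(self.search_revisions(keys, pattern), [])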

3 Answers:

Answer 0 (score: 2):

I did a benchmark. Here are the results:

$ ./benchmark.py 
benchmark with revcount: 1000 timeitcount: 1000
last, first, yield: 0.902059793472
last, first,  cond: 0.897155046463
last,   all, yield: 0.818709135056
last,   all,  cond: 0.818334102631
 all,   all, yield: 1.26602506638
 all,   all,  cond: 1.17208003998
benchmark with revcount: 2000 timeitcount: 1000
last, first, yield: 1.80768609047
last, first,  cond: 1.84234118462
last,   all, yield: 1.64661192894
last,   all,  cond: 1.67588806152
 all,   all, yield: 2.55621600151
 all,   all,  cond: 2.37582707405
benchmark with revcount: 10000 timeitcount: 1000
last, first, yield: 9.34304785728
last, first,  cond: 9.33725094795
last,   all, yield: 8.4673140049
last,   all,  cond: 8.49153590202
 all,   all, yield: 12.9636368752
 all,   all,  cond: 11.780673027

The yield and the condition solutions show very similar times. I think that is because the generator (the yield) also has a loop with a condition in it (the check for a non-empty result, or something equivalent). I thought I had avoided the condition in the loop, but I had only moved it out of sight.

Anyway, the numbers suggest that the performance is essentially the same, so the code should be judged by readability. I will stick with the condition in the loop. I like explicit.

Here is the benchmark code:

#!/usr/bin/python

import functools
import timeit

class Revision:
  # a revision is something like a textfile.
  # the search() method will search the textfile
  # and return the lines which match the given pattern.
  # for demonstration purposes this class is simplified
  # to return predefined results
  def __init__(self, results):
    self.results = results
  def search(self, pattern):
    return self.results

class AbstractSearcher:
  def __init__(self, revisions):
    self.revisions = revisions
  def search_for_first_occurence(self, pattern):
    keys = sorted(self.revisions.iterkeys())
    return self.collect_one_occurence(keys, pattern)
  def search_for_last_occurence(self, pattern):
    keys = sorted(self.revisions.iterkeys(), reverse = True)
    return self.collect_one_occurence(keys, pattern)
  def search_for_any_occurence(self, pattern):
    keys = self.revisions.iterkeys()
    return self.collect_one_occurence(keys, pattern)
  def search_for_all_occurences(self, pattern):
    keys = self.revisions.iterkeys()
    return self.collect_all_occurences(keys, pattern)

class SearcherYield(AbstractSearcher):

  def search_revisions(self, keys, pattern):
    # create generator which yields the results one by one
    for key in keys:
      rev = self.revisions[key]
      result = rev.search(pattern)
      if result:
        yield result

  def collect_one_occurence(self, keys, pattern):
    # take the first result and then abandon the generator
    for result in self.search_revisions(keys, pattern):
      return result
    return []

  def collect_all_occurences(self, keys, pattern):
    # collect all results from generator
    results = []
    for result in self.search_revisions(keys, pattern):
      results.extend(result)
    return results

class SearcherCondition(AbstractSearcher):

  def search_revisions(self, keys, pattern, just_one):
    # collect either all results from all revisions
    # or break the loop after first result found
    results = []
    for key in keys:
      rev = self.revisions[key]
      result = rev.search(pattern)
      if result:
        results.extend(result)
        if just_one:
          break
    return results

  def collect_one_occurence(self, keys, pattern):
    return self.search_revisions(keys, pattern, just_one = True)

  def collect_all_occurences(self, keys, pattern):
    return self.search_revisions(keys, pattern, just_one = False)

def benchmark(revcount, timeitcount):

  lastrev = {}
  for i in range(revcount):
    lastrev[i] = Revision([])
  lastrev[revcount] = Revision([1])

  allrevs = {}
  for i in range(revcount):
    allrevs[i] = Revision([1])

  last_yield = SearcherYield(lastrev)
  last_cond = SearcherCondition(lastrev)
  all_yield = SearcherYield(allrevs)
  all_cond = SearcherCondition(allrevs)

  lfy = functools.partial(last_yield.search_for_first_occurence, 'foo')
  lfc = functools.partial(last_cond.search_for_first_occurence, 'foo')
  lay = functools.partial(last_yield.search_for_all_occurences, 'foo')
  lac = functools.partial(last_cond.search_for_all_occurences, 'foo')
  aay = functools.partial(all_yield.search_for_all_occurences, 'foo')
  aac = functools.partial(all_cond.search_for_all_occurences, 'foo')

  print 'benchmark with revcount: %d timeitcount: %d' % (revcount, timeitcount)
  print 'last, first, yield:', timeit.timeit(lfy, number = timeitcount)
  print 'last, first,  cond:', timeit.timeit(lfc, number = timeitcount)
  print 'last,   all, yield:', timeit.timeit(lay, number = timeitcount)
  print 'last,   all,  cond:', timeit.timeit(lac, number = timeitcount)
  print ' all,   all, yield:', timeit.timeit(aay, number = timeitcount)
  print ' all,   all,  cond:', timeit.timeit(aac, number = timeitcount)

def main():
  timeitcount = 1000
  benchmark(1000, timeitcount)
  benchmark(2000, timeitcount)
  benchmark(10000, timeitcount)

if __name__ == '__main__':
  main()

Some information about my system:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 10.04.1 LTS
Release:    10.04
Codename:   lucid
$ uname -a
Linux lesmana-laptop 2.6.32-26-generic #46-Ubuntu SMP Tue Oct 26 16:46:46 UTC 2010 i686 GNU/Linux
$ python --version
Python 2.6.5
$ cat /proc/cpuinfo | grep name
model name  : Intel(R) Pentium(R) M processor 1.60GHz

Answer 1 (score: 0):

If lookup_item is immutable and collec is any ordered collection, this will solve your problem:

positions = [i for i, item in enumerate(collec) if item==lookup_item]

It will return all the positions at which lookup_item occurs in collec.
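
A quick usage example with made-up values, to show the shape of the result:

collec = ['a', 'b', 'a', 'c']
lookup_item = 'a'
positions = [i for i, item in enumerate(collec) if item == lookup_item]
# positions == [0, 2]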

Answer 2 (score: 0):

On readability, I personally favor the yield, but it is a very close call. I don't have much of a reason for it, other than that I think it is a nice code construct and applicable in many situations.

You may already know this, but the code will need to return the matching revision to the caller. The change that touches the least code would be to return a link back to the revision whenever a revision's search method returns results.
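
One way to do that, sketched here under the assumption that the dictionary key is what identifies a revision, is to have the SearcherYield generator yield the key together with the matching lines:

  def search_revisions(self, keys, pattern):
    # yield the revision key along with the matching lines,
    # so the caller can tell which revision produced the hit
    for key in keys:
      result = self.revisions[key].search(pattern)
      if result:
        yield key, result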

You can cut down the code by combining the python itertools module with yield. The readability is arguably worse, but it is delightfully geeky:

from itertools import chain,repeat,islice,ifilter
def collect_one_occurence(self, keys, pattern):
    return chain(ifilter(None, (rev.search(pattern) for rev in (self.revisions[key] for key in keys))), repeat([])).next()

def collect_all_occurences(self, keys, pattern):
    return list(chain(*[rev.search(pattern) for rev in (self.revisions[key] for key in keys)]))

Obviously you can expand the code to make it more readable, but I collapsed it for benchmarking... curious whether this would improve on your current results?
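
An expanded version of the collapsed collect_one_occurence() one-liner, as a readability sketch (same itertools calls, Python 2, not benchmarked):

from itertools import chain, ifilter, repeat

def collect_one_occurence(self, keys, pattern):
    # lazily search each revision in key order
    searches = (self.revisions[key].search(pattern) for key in keys)
    # ifilter(None, ...) lazily drops the empty results
    hits = ifilter(None, searches)
    # take the first non-empty result, or [] if there is none
    return chain(hits, repeat([])).next()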