I have to sum up a subset of the items in a list. The numbers I use to compute the indexes are given in another list. The indexes may change, but the list I am summing over does not, and both lists are of known size. This part of the code sits inside a nested loop in my program and is by far its slowest part.
I need a fast, clean way to do this in Python 3.
One obvious solution I have tried is hardcoding the sum of all the different items. I have also tried a cleaner solution that uses enumerate and sums a comprehension. The problem is that the latter is much slower than the former.
The exact index to use is 2 * i + x, where i is the position in indices and x is the number at that position in indices. (The indices list represents a set of choices between values that are paired up in the lookup table.)
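For example, with the sample lists below, indices = [0, 1, 0, 0, 1] selects positions 2*0+0 = 0, 2*1+1 = 3, 2*2+0 = 4, 2*3+0 = 6 and 2*4+1 = 9 in lookup, so the sum is 7 + 4 + 1 + 9 + 6 = 27.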
# sample code - the real lists are much larger
lookup = [7, 10, 1, 4, 1, 7, 9, 3, 5, 6]
indices = [0, 1, 0, 0, 1]
# hardcoded solution
s = lookup[2 * 0 + indices[0]] + lookup[2 * 1 + indices[1]] + lookup[2 * 2 + indices[2]] + lookup[2 * 3 + indices[3]] + lookup[2 * 4 + indices[4]]
# Pythonic solution with enumerate
s = sum(lookup[2 * i + x] for i, x in enumerate(indices))
I have tested both options using perf. The clean solution using enumerate is more than 3 times slower than the hardcoded version. The rest of the code is fairly well optimized, so switching to the clean version would slow the whole program down by nearly a factor of 3.
Is there anything I can do to make this faster? Answers that require preprocessing the lookup list in some way are fine: that list is built only once but used many times.
Edit: here is a complete example where hardcoding the lookup appears to be much faster than any other method. The following code runs in 0.27 s under PyPy3; the commented-out slow version takes 2.8 s. (There are obviously faster ways to perform this particular task.)
from itertools import product

lookup = [1, 7, 7, 1, 2, 9, 9, 9, 2, 2, 8, 8, 9, 6, 5, 10, 3, 4, 7, 10, 1, 3, 0, 1, 7, 1, 3, 4, 2, 9]
largest_sum = 0
largest_sum_indices = []
for indices in product(range(2), repeat=15):
    # simulate checking many different lookup lists
    for _ in range(200):
        s = lookup[2 * 0 + indices[0]] + lookup[2 * 1 + indices[1]] + lookup[2 * 2 + indices[2]] + lookup[2 * 3 + indices[3]] + lookup[2 * 4 + indices[4]] + lookup[2 * 5 + indices[5]] + lookup[2 * 6 + indices[6]] + lookup[2 * 7 + indices[7]] + lookup[2 * 8 + indices[8]] + lookup[2 * 9 + indices[9]] + lookup[2 * 10 + indices[10]] + lookup[2 * 11 + indices[11]] + lookup[2 * 12 + indices[12]] + lookup[2 * 13 + indices[13]] + lookup[2 * 14 + indices[14]]
        # clean method is too slow
        #s = sum(lookup[i * 2 + x] for i, x in enumerate(indices))
    if s > largest_sum:
        largest_sum = s
        largest_sum_indices = indices
print(largest_sum)
print(largest_sum_indices)
Answer 0 (score: 1)
You can use the function itemgetter() for a fast lookup:
from operator import itemgetter
from itertools import count
lookup = [7, 10, 1, 4, 1, 7, 9, 3, 5, 6]
indices = [0, 1, 0, 0, 1]
sum(itemgetter(*[i + j for i, j in zip(count(step=2), indices)])(lookup))
# 27
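Since only indices changes between iterations while lookup stays fixed, the constant even offsets can be precomputed once outside the hot loop. A minimal sketch of that idea (my own variation, not part of the original answer; offsets and fast_sum are made-up names):

from operator import itemgetter

lookup = [7, 10, 1, 4, 1, 7, 9, 3, 5, 6]
offsets = list(range(0, len(lookup), 2))  # 0, 2, 4, ... built once

def fast_sum(indices):
    # itemgetter fetches all selected elements in a single call
    return sum(itemgetter(*[o + x for o, x in zip(offsets, indices)])(lookup))

print(fast_sum([0, 1, 0, 0, 1]))  # 27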
Answer 1 (score: 0)
This seems like a good task for numpy. It lets you vectorize many of the operations you are performing and runs C implementations under the hood.
import numpy as np

lookup = np.array([7, 10, 1, 4, 1, 7, 9, 3, 5, 6], dtype=np.int64)
indices = np.array([0, 1, 0, 0, 1], dtype=np.int64)

# even base offsets 0, 2, 4, ... plus the 0/1 choices
real_indices = np.arange(0, 2 * indices.size, 2) + indices
s = lookup[real_indices].sum()
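The base offsets here never change, so they too can be hoisted out of the question's nested loop and reused for every new indices array. A hedged sketch (my own addition, with a made-up fast_sum helper):

import numpy as np

lookup = np.array([7, 10, 1, 4, 1, 7, 9, 3, 5, 6], dtype=np.int64)
base = np.arange(0, lookup.size, 2)  # constant even offsets, computed once

def fast_sum(indices):
    # indices: 1-D array of 0/1 choices, one per pair
    return lookup[base + indices].sum()

print(fast_sum(np.array([0, 1, 0, 0, 1])))  # 27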
Answer 2 (score: 0)
If the edit is indicative of what you are really trying to do, I think you need to re-evaluate your algorithm instead of attempting to squeeze cycles out of innocent code.
Let's start by describing in words what your code does. You have a lookup array of size 2N. You are assigning a bit (0 or 1) to indicate which element you will select from each successive pair, and adding up the N selected elements. By going through every bit combination from 0 to 2**N - 1, you hope to find the maximum sum of N elements.
I would posit that simply inspecting the N pairs of elements in a single pass will give you the correct sum and indices, if you still want those. You can do this in N steps, not N * 2**N.
Here is a really basic, totally unoptimized, solution that I bet will scale better than the one in the question:
lookup = ...
N = len(lookup) // 2
largest_sum = 0
largest_sum_indices = [0] * N
for i in range(N):
    # keep the larger element of the pair (lookup[2*i], lookup[2*i + 1])
    if lookup[2 * i + 1] > lookup[2 * i]:
        largest_sum_indices[i] = 1
        largest_sum += lookup[2 * i + 1]
    else:
        largest_sum += lookup[2 * i]
Notice that I don't call any functions besides len and just use a basic for loop.
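If you only need the sum and not the indices, the same single pass collapses into a one-liner (a sketch of the same idea, added here for illustration):

# walk the list two elements at a time and keep the larger of each pair
largest_sum = sum(max(lookup[i], lookup[i + 1]) for i in range(0, len(lookup), 2))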
Here is an optimized numpy version:
import numpy as np

lookup = np.array(...)

# argmax over each consecutive pair yields the 0/1 choice per pair
largest_sum_indices = np.argmax(lookup.reshape(lookup.size // 2, 2), axis=1)
largest_sum = lookup[2 * np.arange(largest_sum_indices.size) + largest_sum_indices].sum()
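As a quick sanity check (my own addition, not part of the original answer), running this version on the 30-element lookup from the question's edit reproduces the brute-force result:

import numpy as np

lookup = np.array([1, 7, 7, 1, 2, 9, 9, 9, 2, 2, 8, 8, 9, 6, 5, 10,
                   3, 4, 7, 10, 1, 3, 0, 1, 7, 1, 3, 4, 2, 9])

largest_sum_indices = np.argmax(lookup.reshape(lookup.size // 2, 2), axis=1)
largest_sum = lookup[2 * np.arange(largest_sum_indices.size) + largest_sum_indices].sum()
print(largest_sum)          # 99
print(largest_sum_indices)  # [1 0 1 0 0 0 0 1 1 1 1 1 0 1 1]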
While your own tests will show you which algorithm works best for you, keep in mind that either of the options here could handily process a few million elements without making the user too antsy, while something that scales as O(N * 2**N) would take longer than the heat death of many universes. Any time you use product, there is a good chance that a better solution is available.