I'm currently working on a modeling environment in Python which uses dicts to share connection properties between connected parts. The way I'm doing this right now takes roughly 15-20% of my total program runtime, which amounts to quite a few million iterations...
So I've been looking at how to speed up updating multiple values in dicts and getting multiple values from dicts.
My example dict looks like this (the number of key-value pairs is expected to stay in the current range of 300 to 1000, so I padded it to this size):
import numpy as np

val_dict = {'a': 5.0, 'b': 18.8, 'c': -55/2}
for i in range(200):
    val_dict[str(i)] = i
    val_dict[i] = i**2

keys = ('b', 123, '89', 'c')
new_values = np.arange(10, 41, 10)
length = new_values.shape[0]
While the shapes of keys and new_values, as well as the number of key-value pairs in val_dict, will always stay the same, the values of new_values change at every iteration, so they have to be updated at every iteration (and also retrieved from another part of my code at every iteration).
I timed several approaches, and getting multiple values out of a dict with itemgetter from the operator module seems to be the fastest. I can define the getter before the iterations start, since the required variables are constant:
getter = itemgetter(*keys)
%timeit getter(val_dict)
The slowest run took 10.45 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 140 ns per loop
I guess that's fine, or is there something even faster?
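For reference (this is not from the original post), two alternatives one could time against the itemgetter approach:

vals_comp = [val_dict[k] for k in keys]           # plain list comprehension
vals_map = list(map(val_dict.__getitem__, keys))  # map over the bound lookup method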
But when assigning these values to a NumPy array via masking, it slows down quite a bit:
result = np.ones(25)
idx = np.array((0, 5, 8, -1))

def getter_fun(result, idx, getter, val_dict):
    result[idx] = getter(val_dict)

%timeit getter_fun(result, idx, getter, val_dict)
The slowest run took 11.44 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 2.77 µs per loop
Is there any way to improve this? I guess the tuple unpacking is the worst part here...
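Purely as an illustration (not from the original post), one variant worth timing would fill the target positions via np.fromiter, which builds the value array directly from a generator instead of the tuple that itemgetter returns; whether it is actually faster would have to be measured:

def fromiter_fun(result, idx, keys, val_dict):
    # hypothetical alternative: build the values array without an intermediate tuple
    result[idx] = np.fromiter((val_dict[k] for k in keys), dtype=result.dtype,
                              count=len(keys))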
For setting multiple values I have several candidates: a function that unpacks the values, one that calls update with the given key-value pairs, one that uses a for loop, one with a dict comprehension, and one with a generator expression.
def unpack_putter(val_dict, keys, new_values):
    (val_dict[keys[0]],
     val_dict[keys[1]],
     val_dict[keys[2]],
     val_dict[keys[3]]) = new_values
%timeit unpack_putter(val_dict, keys, new_values)
The slowest run took 8.85 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 1.29 µs per loop
def upd_putter(val_dict, keys, new_values):
    val_dict.update({keys[0]: new_values[0],
                     keys[1]: new_values[1],
                     keys[2]: new_values[2],
                     keys[3]: new_values[3]})
%timeit upd_putter(val_dict, keys, new_values)
The slowest run took 15.22 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 963 ns per loop
def for_putter(val_dict, keys, new_values, length):
    for i in range(length):
        val_dict[keys[i]] = new_values[i]
%timeit for_putter(val_dict, keys, new_values, length)
The slowest run took 12.31 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 1.14 µs per loop
def dictcomp_putter(val_dict, keys, new_values, length):
    val_dict.update({keys[i]: new_values[i] for i in range(length)})
%timeit dictcomp_putter(val_dict, keys, new_values, length)
The slowest run took 7.13 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 1.69 µs per loop
def gen_putter(val_dict, keys, new_values, length):
    gen = ((keys[i], new_values[i]) for i in range(length))
    val_dict.update(dict(gen))
%timeit gen_putter(val_dict, keys, new_values, length)
The slowest run took 10.03 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 2.54 µs per loop
The upd_putter would be the fastest, but can I even use it with varying shapes of keys and new_values? (They are still constant during the iterations, but each considered part has a different number of keys to update, which has to be determined by user input.) Interestingly, the plain for loop looks quite OK to me. So I guess I'm doing something wrong and there must be a faster way.
One last thing to consider: I'll be using Cython soon, so I suppose that would make the for loop favorable? Or I could use joblib to parallelize the for loop. I also thought about using numba, but then I'd have to get rid of all the dicts...
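Just to illustrate what getting rid of the dicts could mean (this sketch is not part of my current setup): map every key to a fixed array position once, outside the loop, and then work on a plain NumPy array with fancy indexing:

key_to_pos = {k: i for i, k in enumerate(val_dict)}           # built once before the iterations
values_arr = np.array([float(v) for v in val_dict.values()])  # dict values as a flat array
upd_pos = np.array([key_to_pos[k] for k in keys])             # positions of the keys to update
values_arr[upd_pos] = new_values                               # pure NumPy fancy indexing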
Hope you can help me with this problem.
EDIT for MSeifert (even though I'm not sure this is exactly what you meant):
tuplelist = list()
for i in range(200):
    tuplelist.append(i)
    tuplelist.append(str(i))
keys_long = tuple(tuplelist)
new_values_long = np.arange(0, 400)
%timeit for_putter(val_dict, keys_long, new_values_long, 400)
10000 loops, best of 3: 73.5 µs per loop
%timeit dictcomp_putter(val_dict, keys_long, new_values_long, 400)
10000 loops, best of 3: 96.4 µs per loop
%timeit gen_putter(val_dict, keys_long, new_values_long, 400)
10000 loops, best of 3: 129 µs per loop
Answer (score: 4):
Let's focus right away on two very important things that have nothing to do with performance: maintainability and scalability.
The first two approaches with manual indexing:
(val_dict[keys[0]],
 val_dict[keys[1]],
 val_dict[keys[2]],
 val_dict[keys[3]]) = new_values
and
val_dict.update({keys[0]: new_values[0],
                 keys[1]: new_values[1],
                 keys[2]: new_values[2],
                 keys[3]: new_values[3]})
hard-code (a maintenance nightmare) the number of elements you insert, so these approaches don't scale well. I therefore won't include them in the rest of this answer. I'm not saying they are bad - they just don't scale, and it's hard to compare timings of functions that only work for a specific number of entries.
First let me present two more approaches based on zip (use itertools.izip if you're on Python 2.x):
def new1(val_dict, keys, new_values, length):
    val_dict.update(zip(keys, new_values))

def new2(val_dict, keys, new_values, length):
    for key, val in zip(keys, new_values):
        val_dict[key] = val
This would be the "most Pythonic" way to solve it (at least in my opinion).
I also changed new_values to a list, because iterating over a NumPy array is worse than converting the array to a list and then iterating over the list. If you're interested in the details, I elaborated on that part in another answer.
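A quick way to check that claim yourself (illustrative only; the numbers depend on your machine and are not from the original answer):

import numpy as np
arr = np.arange(1000)
%timeit [x for x in arr]           # iterates and boxes NumPy scalars
%timeit [x for x in arr.tolist()]  # converts once, then iterates plain Python ints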
Let's see how these approaches perform:
import numpy as np

def old_for(val_dict, keys, new_values, length):
    for i in range(length):
        val_dict[keys[i]] = new_values[i]

def old_update_comp(val_dict, keys, new_values, length):
    val_dict.update({keys[i]: new_values[i] for i in range(length)})

def old_update_gen(val_dict, keys, new_values, length):
    gen = ((keys[i], new_values[i]) for i in range(length))
    val_dict.update(dict(gen))

def new1(val_dict, keys, new_values, length):
    val_dict.update(zip(keys, new_values))

def new2(val_dict, keys, new_values, length):
    for key, val in zip(keys, new_values):
        val_dict[key] = val
val_dict = {'a': 1, 'b': 2, 'c': 3}
keys = ('b', 123, '89', 'c')
new_values = np.arange(10, 41, 10).tolist()
length = len(new_values)
%timeit old_for(val_dict, keys, new_values, length)
# 4.1 µs ± 183 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit old_update_comp(val_dict, keys, new_values, length)
# 9.56 µs ± 180 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit old_update_gen(val_dict, keys, new_values, length)
# 17 µs ± 332 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit new1(val_dict, keys, new_values, length)
# 5.92 µs ± 123 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit new2(val_dict, keys, new_values, length)
# 3.23 µs ± 84.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
And with more keys and values:
val_dict = {'a': 1, 'b': 2, 'c': 3}
keys = range(1000)
new_values = range(1000)
length = len(new_values)
%timeit old_for(val_dict, keys, new_values, length)
# 1.08 ms ± 26 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit old_update_comp(val_dict, keys, new_values, length)
# 1.08 ms ± 13.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit old_update_gen(val_dict, keys, new_values, length)
# 1.44 ms ± 31.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit new1(val_dict, keys, new_values, length)
# 242 µs ± 3.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit new2(val_dict, keys, new_values, length)
# 346 µs ± 8.24 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
So for larger inputs my approaches seem to be much faster (2-5 times) than yours.
You could try to improve this with Cython. Unfortunately Cython doesn't support comprehensions inside cdef or cpdef functions, so I only cythonized the other approaches:
%load_ext cython
%%cython
cpdef new1_cy(dict val_dict, tuple keys, new_values, Py_ssize_t length):
    val_dict.update(zip(keys, new_values.tolist()))

cpdef new2_cy(dict val_dict, tuple keys, new_values, Py_ssize_t length):
    for key, val in zip(keys, new_values.tolist()):
        val_dict[key] = val

cpdef new3_cy(dict val_dict, tuple keys, int[:] new_values, Py_ssize_t length):
    cdef Py_ssize_t i
    for i in range(length):
        val_dict[keys[i]] = new_values[i]
This time I make keys a tuple and new_values a NumPy array so that they work with the Cython functions defined above:
import numpy as np
val_dict = {'a': 1, 'b': 2, 'c': 3}
keys = tuple(range(4))
new_values = np.arange(4)
length = len(new_values)
%timeit new1(val_dict, keys, new_values, length)
# 7.88 µs ± 317 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit new2(val_dict, keys, new_values, length)
# 4.4 µs ± 140 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit new2_cy(val_dict, keys, new_values, length)
# 5.51 µs ± 56.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
val_dict = {'a': 1, 'b': 2, 'c': 3}
keys = tuple(range(1000))
new_values = np.arange(1000)
length = len(new_values)
%timeit new1_cy(val_dict, keys, new_values, length)
# 208 µs ± 9.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit new2_cy(val_dict, keys, new_values, length)
# 231 µs ± 13.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit new3_cy(val_dict, keys, new_values, length)
# 156 µs ± 4.13 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
So if you have a tuple and a NumPy array, you can get almost another factor-2 speedup with the function that uses normal indexing and a memoryview, new3_cy - at least if there are a lot of key-value pairs to insert.
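One caveat I would add (not from the original answer): the int[:] memoryview in new3_cy expects an array whose dtype matches a C int, so on platforms where np.arange defaults to 64-bit integers you may need an explicit cast, or a wider memoryview type in the signature:

new_values = np.arange(1000, dtype=np.int32)   # cast so the int[:] memoryview accepts the array
# ...or declare the parameter as `long long[:] new_values` in the Cython function instead.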
Note that I didn't address getting multiple values from the dict, because operator.itemgetter is probably the best way to do that anyway.