I have recently been playing with Cython and Numba to speed up small snippets of Python that do numerical simulation. At first, developing with Numba seems easier. Yet, I find it hard to understand when Numba will deliver better performance and when it will not.

One example of an unexpected performance drop is when I use the function np.zeros() to allocate a large array inside a compiled function. For example, consider these three function definitions:
```python
import numpy as np
from numba import jit

def pure_python(n):
    mat = np.zeros((n, n), dtype=np.double)
    # do something
    return mat.reshape((n**2))

@jit(nopython=True)
def pure_numba(n):
    mat = np.zeros((n, n), dtype=np.double)
    # do something
    return mat.reshape((n**2))

def mixed_numba1(n):
    return mixed_numba2(np.zeros((n, n)))

@jit(nopython=True)
def mixed_numba2(array):
    n = len(array)
    # do something
    return array.reshape((n, n))

# To compile
pure_numba(10)
mixed_numba1(10)
```
Since the # do something section is empty, I do not expect the pure_numba function to be faster. But I was not expecting a performance drop like this:
```
n = 10000
%timeit x = pure_python(n)
%timeit x = pure_numba(n)
%timeit x = mixed_numba1(n)
```
I get (Python 3.7.7, Numba 0.48.0, on a Mac):
```
4.96 µs ± 65.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
344 ms ± 7.76 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
3.8 µs ± 30.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
```
Here, the Numba code is much slower when I call np.zeros() inside the compiled function. When np.zeros() is outside the function, it works fine.

Am I doing something wrong here, or should I always allocate large arrays like these outside of Numba-compiled functions?
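One pattern that avoids the allocation inside the compiled function is to let the caller own the buffer and have the jitted kernel only fill it, much like mixed_numba1/mixed_numba2 above. A minimal sketch (the fill_diag kernel is illustrative, and the import fallback is only there so the snippet also runs without Numba installed):

```python
import numpy as np

try:
    from numba import jit
except ImportError:
    # Fallback: a no-op decorator, so the sketch runs without Numba
    def jit(*args, **kwargs):
        return lambda func: func

@jit(nopython=True)
def fill_diag(out):
    # The jitted code only writes into a caller-provided buffer;
    # the large allocation stays outside the compiled function.
    n = out.shape[0]
    for i in range(n):
        out[i, i] = 1.0
    return out.reshape((n * n,))

n = 100
buf = np.zeros((n, n), dtype=np.double)  # allocated outside the jitted code
flat = fill_diag(buf)
```

This keeps the compiled function free of large allocations while still letting it do all the numerical work.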
Update
This seems to be related to a lazy initialization of the matrix by np.zeros((n,n)) when n is large enough (see Performance of zeros function in Numpy).
```
for n in [1000, 2000, 5000]:
    print('n =', n)
    %timeit x = pure_python(n)
    %timeit x = pure_numba(n)
    %timeit x = mixed_numba1(n)
```
gives me:
```
n = 1000
468 µs ± 15.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
296 µs ± 6.55 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
300 µs ± 2.26 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
n = 2000
4.79 ms ± 182 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
4.45 ms ± 36 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
4.54 ms ± 127 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
n = 5000
270 µs ± 4.66 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
104 ms ± 599 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
119 µs ± 1.24 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
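The lazy initialization can be made visible directly in pure NumPy (a sketch; the exact numbers vary by machine): for a large array, np.zeros only reserves zeroed pages, so the allocation returns almost immediately and the real cost only appears when the pages are first written.

```python
import numpy as np
from time import perf_counter

n = 5000

start = perf_counter()
mat = np.zeros((n, n), dtype=np.double)  # pages are only reserved, not written
alloc_t = perf_counter() - start

start = perf_counter()
mat += 1.0  # the first write actually faults the zeroed pages in
touch_t = perf_counter() - start

# Typically the first write is orders of magnitude slower than the
# allocation itself; the exact ratio depends on the machine.
print(f'allocate: {alloc_t:.2e} s, first write: {touch_t:.2e} s')
```

This would explain why pure_python at n = 5000 appears faster than at n = 2000: the zeroed pages are never actually touched.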
Answer 0 (score: 3)
tl;dr: Numpy uses C memory functions, whereas Numba has to write the zeros itself.
I wrote a script to plot the time it takes several options to complete, and it seems Numba suffers a dramatic performance drop once the size of the np.zeros array reaches 2048*2048*8 = 32 MB on my machine, as the plot below shows.
Numba's implementation of np.zeros is only as fast as creating an empty array and filling it with zeros by looping over the array's dimensions (that is the green "Numba nested loop" curve in the plot). This can actually be double-checked by setting the NUMBA_DUMP_IR environment variable before running the script (see below): there is little difference when comparing against the dump for numba_loop.
Interestingly, np.zeros gets slightly faster just above the 32 MB threshold.
I am far from an expert, but my best guess is that the 32 MB limit is an OS or hardware bottleneck: the amount of data from a single process that fits in a cache. If it is exceeded, moving the data in and out of the cache to operate on it becomes very time-consuming.
Numpy, in contrast, uses calloc to obtain a memory segment together with a promise that the data will be filled with zeros when it is accessed.
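The same asymmetry can be reproduced in pure NumPy (a sketch; np.empty plus an explicit fill stands in for what the compiled Numba code effectively has to do, while np.zeros goes through the calloc path):

```python
import numpy as np
from time import perf_counter

n = 4096  # 4096 * 4096 * 8 bytes = 128 MB, well above the 32 MB threshold

start = perf_counter()
a = np.zeros((n, n), dtype=np.double)  # calloc path: zeroing is deferred
zeros_t = perf_counter() - start

start = perf_counter()
b = np.empty((n, n), dtype=np.double)  # plain allocation...
b[:] = 0.0                             # ...then every byte is written explicitly
fill_t = perf_counter() - start

print(f'np.zeros: {zeros_t:.2e} s, np.empty + fill: {fill_t:.2e} s')
```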
This is as far as I got. I realize it is only half an answer, but maybe someone more knowledgeable can shed light on what is actually going on.
The Numba IR dump:
```
---------------------------IR DUMP: pure_numba_zeros----------------------------
label 0:
    n = arg(0, name=n)                       ['n']
    $2load_global.0 = global(np: <module 'numpy' from '/lib/python3.8/site-packages/numpy/__init__.py'>) ['$2load_global.0']
    $4load_attr.1 = getattr(value=$2load_global.0, attr=zeros) ['$2load_global.0', '$4load_attr.1']
    del $2load_global.0                      []
    $10build_tuple.4 = build_tuple(items=[Var(n, script.py:15), Var(n, script.py:15)]) ['$10build_tuple.4', 'n', 'n']
    $12load_global.5 = global(np: <module 'numpy' from '/lib/python3.8/site-packages/numpy/__init__.py'>) ['$12load_global.5']
    $14load_attr.6 = getattr(value=$12load_global.5, attr=double) ['$12load_global.5', '$14load_attr.6']
    del $12load_global.5                     []
    $18call_function_kw.8 = call $4load_attr.1($10build_tuple.4, func=$4load_attr.1, args=[Var($10build_tuple.4, script.py:15)], kws=[('dtype', Var($14load_attr.6, script.py:15))], vararg=None) ['$10build_tuple.4', '$14load_attr.6', '$18call_function_kw.8', '$4load_attr.1']
    del $4load_attr.1                        []
    del $14load_attr.6                       []
    del $10build_tuple.4                     []
    mat = $18call_function_kw.8              ['$18call_function_kw.8', 'mat']
    del $18call_function_kw.8                []
    $24load_method.10 = getattr(value=mat, attr=reshape) ['$24load_method.10', 'mat']
    del mat                                  []
    $const28.12 = const(int, 2)              ['$const28.12']
    $30binary_power.13 = n ** $const28.12    ['$30binary_power.13', '$const28.12', 'n']
    del n                                    []
    del $const28.12                          []
    $32call_method.14 = call $24load_method.10($30binary_power.13, func=$24load_method.10, args=[Var($30binary_power.13, script.py:16)], kws=(), vararg=None) ['$24load_method.10', '$30binary_power.13', '$32call_method.14']
    del $30binary_power.13                   []
    del $24load_method.10                    []
    $34return_value.15 = cast(value=$32call_method.14) ['$32call_method.14', '$34return_value.15']
    del $32call_method.14                    []
    return $34return_value.15                ['$34return_value.15']
```
The script that generates the plot:
```python
import numpy as np
from numba import jit
from time import time
import os
import matplotlib.pyplot as plt

os.environ['NUMBA_DUMP_IR'] = '1'

def numpy_zeros(n):
    mat = np.zeros((n, n), dtype=np.double)
    return mat.reshape((n**2))

@jit(nopython=True)
def numba_zeros(n):
    mat = np.zeros((n, n), dtype=np.double)
    return mat.reshape((n**2))

@jit(nopython=True)
def numba_loop(n):
    mat = np.empty((n * 2, n), dtype=np.float32)
    for i in range(mat.shape[0]):
        for j in range(mat.shape[1]):
            mat[i, j] = 0.
    return mat.reshape((2 * n**2))

# To compile
numba_zeros(10)
numba_loop(10)

os.environ['NUMBA_DUMP_IR'] = '0'

max_n = 4100
time_deltas = {
    'numpy_zeros': [],
    'numba_zeros': [],
    'numba_loop': [],
}
call_count = 10

for n in range(0, max_n, 10):
    for f in (numpy_zeros, numba_zeros, numba_loop):
        start = time()
        for i in range(call_count):
            x = f(n)
        delta = time() - start
        time_deltas[f.__name__].append(delta / call_count)
        print(f'{f.__name__:25} n = {n}: {delta}')
    print()

size = np.arange(0, max_n, 10) ** 2 * 8 / 1024 ** 2

fig, ax = plt.subplots()
plt.xticks(np.arange(0, size[-1], 16))
plt.axvline(x=32, color='gray', lw=0.5)
ax.plot(size, time_deltas['numpy_zeros'], label='Numpy zeros (calloc)')
ax.plot(size, time_deltas['numba_zeros'], label='Numba zeros')
ax.plot(size, time_deltas['numba_loop'], label='Numba nested loop')
ax.set_xlabel('Size of array in MB')
ax.set_ylabel(r'Mean $\Delta$t in s')
plt.legend(loc='upper left')
plt.show()
```