Numba seems like a great solution for speeding up numerical code. However, when there is an assignment to an array, Numba appears to be slower than standard Python. Consider this example comparing four alternatives, with/without Numba, writing to an array vs. to a scalar:
(The computation is kept deliberately trivial so the focus stays on the issue, namely assigning to a scalar versus assigning to an array cell.)
from numba import autojit

@autojit
def fast_sum_arr(arr):
    z = arr.copy()
    M = len(arr)
    for i in range(M):
        z[i] += arr[i]
    return z

def sum_arr(arr):
    z = arr.copy()
    M = len(arr)
    for i in range(M):
        z[i] += arr[i]
    return z

@autojit
def fast_sum_sclr(arr):
    z = 0
    M = len(arr)
    for i in range(M):
        z += arr[i]
    return z

def sum_sclr(arr):
    z = 0
    M = len(arr)
    for i in range(M):
        z += arr[i]
    return z
Evaluating the four alternatives with IPython's %timeit I get:
In [125]: %timeit fast_sum_arr(arr)
100 loops, best of 3: 10.8 ms per loop
In [126]: %timeit sum_arr(arr)
100 loops, best of 3: 4.11 ms per loop
In [127]: %timeit fast_sum_sclr(arr)
100000 loops, best of 3: 10 us per loop
In [128]: %timeit sum_sclr(arr)
100 loops, best of 3: 2.93 ms per loop
sum_arr, which was not compiled with Numba, is more than twice as fast as fast_sum_arr, which was compiled with Numba. On the other hand, fast_sum_sclr, which was compiled with Numba, is more than two orders of magnitude faster than sum_sclr, which was not.
So Numba does a terrific job of speeding up sum_sclr, but actually makes sum_arr run slower. The only difference between sum_sclr and sum_arr is that the former assigns to a scalar while the latter assigns to an array cell.
I don't know whether it is related, but I recently read the following on the blog http://www.phi-node.com/:
"It turns out that when Numba hits any construct it doesn't support directly, it switches to a (very) slow code path."
The blog author got Numba to run much faster by using an if statement instead of Python's max().
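For illustration, the kind of change described there looks roughly like this (a sketch of the pattern only, not the blog's actual code; whether max() actually hits the slow path depends on the Numba version):
```python
from numba import autojit

@autojit
def clamp_with_max(arr, lo):
    total = 0.0
    for i in range(len(arr)):
        total += max(arr[i], lo)   # builtin call that Numba may not handle natively
    return total

@autojit
def clamp_with_if(arr, lo):
    total = 0.0
    for i in range(len(arr)):
        v = arr[i]
        if v < lo:                 # plain comparison, easy to compile to machine code
            v = lo
        total += v
    return total
```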
Any insight into this?
Thanks,
FS
Answer 0 (score: 4)
The slow part here is the arr.copy() call, not the write access to the array. Proof:
# -*- coding: utf-8 -*-
from numba import autojit
from Timer import Timer   # local helper module (a simple timing context manager), not part of the standard library
import numpy as np

@autojit
def fast_sum_arr(arr, z):
    #z = arr.copy()
    M = len(arr)
    for i in range(M):
        z[i] += arr[i]
    return z

def sum_arr(arr, z):
    #z = arr.copy()
    M = len(arr)
    for i in range(M):
        z[i] += arr[i]
    return z

@autojit
def fast_sum_sclr(arr):
    z = 0
    M = len(arr)
    for i in range(M):
        z += arr[i]
    return z

def sum_sclr(arr):
    z = 0
    M = len(arr)
    for i in range(M):
        z += arr[i]
    return z

if __name__ == '__main__':
    vec1 = np.ones(1000)
    z = vec1.copy()

    with Timer() as t0:
        for i in range(10000):
            pass
    print "time for empty loop ", t0.secs
    print

    with Timer() as t1:
        for i in range(10000):
            sum_arr(vec1, z)
    print "time for sum_arr [µs]: ", (t1.secs-t0.secs) / 10000 * 1e6

    with Timer() as t1:
        for i in range(10000):
            fast_sum_arr(vec1, z)
    print "time for fast_sum_arr [µs]: ", (t1.secs-t0.secs) / 10000 * 1e6

    with Timer() as t1:
        for i in range(10000):
            sum_sclr(vec1)
    print "time for sum_sclr [µs]: ", (t1.secs-t0.secs) / 10000 * 1e6

    with Timer() as t1:
        for i in range(10000):
            fast_sum_sclr(vec1)
    print "time for fast_sum_sclr [µs]: ", (t1.secs-t0.secs) / 10000 * 1e6
"""
time for empty loop 0.000312089920044
time for sum_arr [µs]: 432.02688694
time for fast_sum_arr [µs]: 7.43598937988
time for sum_arr [µs]: 284.574580193
time for fast_sum_arr [µs]: 5.74610233307
"""
Answer 1 (score: 1)
I don't know much about numba, but if we make some basic assumptions about what it is doing under the hood, we can infer why the autojit version is slower and how to speed it up with a few small changes...
Let's start with sum_arr,
1 def sum_arr(arr):
2     z = arr.copy()
3     M = len(arr)
4     for i in range(M):
5         z[i] += arr[i]
6
7     return z
It is fairly clear what is going on here, but let's single out line 5, which can be rewritten as
1 a = arr[i]
2 b = z[i]
3 c = a + b
4 z[i] = c
which Python in turn carries out as
1 a = arr.__getitem__(i)
2 b = z.__getitem__(i)
3 c = a.__add__(b)
4 z.__setitem__(i, c)
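As a quick sanity check (an illustrative snippet added here, not from the original answer), you can see what element types these item accesses actually produce; the exact integer width is platform dependent:
```python
import numpy as np

arr = np.arange(1000)
print(type(arr[0]))           # numpy.int64 on most 64-bit platforms
print(type(arr.tolist()[0]))  # plain Python int after tolist()
```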
Here a, b, and c are all instances of numpy.int64 (or similar). I suspect that numba is trying to inspect the data types of these items and convert them to numba-native data types (one of the biggest slowdowns I see in numpy code comes from inadvertently switching back and forth between python and numpy data types). If that is indeed what is happening, numba performs at least 3 conversions per iteration, 2 numpy.int64 -> native and 1 native -> numpy.int64, possibly through worse intermediates (numpy.int64 -> python int -> native (c int)). I also suspect numba adds extra overhead while checking the data types and probably doesn't optimize the loop at all. Let's see what happens if we remove the type changes from the loop...
1 @autojit
2 def fast_sum_arr2(arr):
3     z = arr.tolist()
4     M = len(arr)
5     for i in range(M):
6         z[i] += arr[i]
7
8     return numpy.array(z)
The subtle change on line 3, tolist instead of copy, changes the data type to Python integers, but we still have a numpy.int64 -> native conversion on line 6. Let's rewrite that as z[i] += z[i],
1 @autojit
2 def fast_sum_arr3(arr):
3     z = arr.tolist()
4     M = len(arr)
5     for i in range(M):
6         z[i] += z[i]
7
8     return numpy.array(z)
With each of these changes we see a decent speedup (though it doesn't necessarily beat pure python). Of course, arr + arr is just stupidly fast.
import numpy
from numba import autojit

def sum_arr(arr):
    z = arr.copy()
    M = len(arr)
    for i in range(M):
        z[i] += arr[i]

    return z

@autojit
def fast_sum_arr(arr):
    z = arr.copy()
    M = len(arr)
    for i in range(M):
        z[i] += arr[i]

    return z

def sum_arr2(arr):
    z = arr.tolist()
    M = len(arr)
    for i in range(M):
        z[i] += arr[i]

    return numpy.array(z)

@autojit
def fast_sum_arr2(arr):
    z = arr.tolist()
    M = len(arr)
    for i in range(M):
        z[i] += arr[i]

    return numpy.array(z)

def sum_arr3(arr):
    z = arr.tolist()
    M = len(arr)
    for i in range(M):
        z[i] += z[i]

    return numpy.array(z)

@autojit
def fast_sum_arr3(arr):
    z = arr.tolist()
    M = len(arr)
    for i in range(M):
        z[i] += z[i]

    return numpy.array(z)

def sum_arr4(arr):
    return arr+arr

@autojit
def fast_sum_arr4(arr):
    return arr+arr

arr = numpy.arange(1000)
and the timings,
In [1]: %timeit sum_arr(arr)
10000 loops, best of 3: 129 us per loop
In [2]: %timeit sum_arr2(arr)
1000 loops, best of 3: 232 us per loop
In [3]: %timeit sum_arr3(arr)
10000 loops, best of 3: 51.8 us per loop
In [4]: %timeit sum_arr4(arr)
100000 loops, best of 3: 3.68 us per loop
In [5]: %timeit fast_sum_arr(arr)
1000 loops, best of 3: 216 us per loop
In [6]: %timeit fast_sum_arr2(arr)
10000 loops, best of 3: 65.6 us per loop
In [7]: %timeit fast_sum_arr3(arr)
10000 loops, best of 3: 56.5 us per loop
In [8]: %timeit fast_sum_arr4(arr)
100000 loops, best of 3: 2.03 us per loop
Answer 2 (score: 1)
Yes, Numba compiles lazily, on the first call, so the second time you call a jitted function it is much faster. For large arrays numba still outperforms plain Python despite this lazy initialization.
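A simple way to keep that compilation step out of a measurement (a suggestion added here, not part of the original answer) is to call the jitted function once on a small input of the same type before timing it:
```python
import numpy as np
from numba import autojit

@autojit
def fast_sum_sclr(arr):
    z = 0
    M = len(arr)
    for i in range(M):
        z += arr[i]
    return z

b = np.arange(1000000)
fast_sum_sclr(b[:10])   # warm-up call: triggers compilation for this argument type
# subsequent calls on b reuse the already-compiled version, so timing them
# measures only execution, not compilation
```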
Try the following, uncommenting the different choices of b:
import time
import numpy as np
from numba import jit, autojit

@autojit
def fast_sum_arr(arr):
    z = arr.copy()
    M = len(arr)
    for i in range(M):
        z[i] += arr[i]
    return z

def sum_arr(arr):
    z = arr.copy()
    M = len(arr)
    for i in range(M):
        z[i] += arr[i]
    return z

@autojit
def fast_sum_sclr(arr):
    z = 0
    M = len(arr)
    for i in range(M):
        z += arr[i]
    return z

def sum_sclr(arr):
    z = 0
    M = len(arr)
    for i in range(M):
        z += arr[i]
    return z

b = np.arange(100)
# b = np.arange(1000000)
# b = np.arange(100000000)

print('Vector of len {}\n'.format(len(b)))

print('Sum ARR:\n')
time1 = time.time()
sum_arr(b)
time2 = time.time()
print('No numba: {}'.format(time2 - time1))

time1 = time.time()
fast_sum_arr(b)
time2 = time.time()
print('Numba first time: {}'.format(time2 - time1))

time1 = time.time()
fast_sum_arr(b)
time2 = time.time()
print('Numba second time: {}'.format(time2 - time1))

print('\nSum SCLR:\n')
time1 = time.time()
sum_sclr(b)
time2 = time.time()
print('No numba: {}'.format(time2 - time1))

time1 = time.time()
fast_sum_sclr(b)
time2 = time.time()
print('Numba first time: {}'.format(time2 - time1))

time1 = time.time()
fast_sum_sclr(b)
time2 = time.time()
print('Numba second time: {}'.format(time2 - time1))
On my Python 3 system with numba 0.34.0 I get
"""
Vector of len 100
Sum ARR:
No numba: 7.414817810058594e-05
Numba first time: 0.07130813598632812
Numba second time: 3.814697265625e-06
Sum SCLR:
No numba: 2.6941299438476562e-05
Numba first time: 0.05761408805847168
Numba second time: 1.4066696166992188e-05
"""
and
"""
Vector of len 1000000
Sum ARR:
No numba: 0.3144559860229492
Numba first time: 0.07181787490844727
Numba second time: 0.0014197826385498047
Sum SCLR:
No numba: 0.15929198265075684
Numba first time: 0.05956888198852539
Numba second time: 0.00037789344787597656
"""
and
"""
Vector of len 100000000
Sum ARR:
No numba: 30.345629930496216
Numba first time: 0.7232880592346191
Numba second time: 0.586756706237793
Sum SCLR:
No numba: 16.271318912506104
Numba first time: 0.11036324501037598
Numba second time: 0.06010794639587402
"""
Interestingly, the difference in computation time between the first and the second call shrinks as the array size grows. I don't know why it works that way.
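One plausible reading of these numbers (an interpretation added here, not part of the original answer): the gap between the first and the second call is dominated by the one-off compilation cost, which stays on the order of a tenth of a second regardless of input size, while the actual run time grows with the array, so the relative difference shrinks. Taking the differences from the outputs above:
```python
# first-call time minus second-call time, using the numbers quoted above
print(0.07130813598632812 - 3.814697265625e-06)     # len 100        -> ~0.071 s
print(0.07181787490844727 - 0.0014197826385498047)  # len 1000000    -> ~0.070 s
print(0.7232880592346191 - 0.586756706237793)       # len 100000000  -> ~0.137 s
```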