I'm puzzled by this:
def main():
    for i in xrange(2560000):
        a = [0.0, 0.0, 0.0]

main()
$ time python test.py
real 0m0.793s
Now let's look at numpy:
import numpy

def main():
    for i in xrange(2560000):
        a = numpy.array([0.0, 0.0, 0.0])

main()
$ time python test.py
real 0m39.338s
Holy CPU cycles, Batman!
Using numpy.zeros(3) improves things, but it's still not enough, IMHO:
$ time python test.py
real 0m5.610s
user 0m5.449s
sys 0m0.070s
numpy.version.version = '1.5.1'
In case you're wondering whether the list creation in the first example is optimized away, it is not:
5 19 LOAD_CONST 2 (0.0)
22 LOAD_CONST 2 (0.0)
25 LOAD_CONST 2 (0.0)
28 BUILD_LIST 3
31 STORE_FAST 1 (a)
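(For completeness, a disassembly like the one above can be reproduced with the standard dis module; a minimal sketch follows, and the exact line numbers and offsets will vary with the Python version.)

import dis

def main():
    for i in xrange(2560000):
        a = [0.0, 0.0, 0.0]

dis.dis(main)   # prints the bytecode, including the BUILD_LIST/STORE_FAST pair for a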
Answer 0 (score: 37)
Numpy is optimized for large amounts of data. Give it a tiny 3-element array and, unsurprisingly, it performs poorly.
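(As an aside, a minimal sketch of the usual numpy remedy when the surrounding loop permits it: allocate all of the small vectors as one big 2-D block up front instead of creating millions of tiny arrays; the 2560000 below is just the count from the question.)

import numpy

# one allocation of a 2560000 x 3 block instead of 2560000 tiny arrays;
# row i then plays the role of the per-iteration a
a = numpy.zeros((2560000, 3))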
Consider a separate test:
import timeit
reps = 100
pythonTest = timeit.Timer('a = [0.] * 1000000')
numpyTest = timeit.Timer('a = numpy.zeros(1000000)', setup='import numpy')
uninitialised = timeit.Timer('a = numpy.empty(1000000)', setup='import numpy')
# empty simply allocates the memory. Thus the initial contents of the array
# is random noise
print 'python list:', pythonTest.timeit(reps), 'seconds'
print 'numpy array:', numpyTest.timeit(reps), 'seconds'
print 'uninitialised array:', uninitialised.timeit(reps), 'seconds'
Output:
python list: 1.22042918205 seconds
numpy array: 1.05412316322 seconds
uninitialised array: 0.0016028881073 seconds
It seems that it's the zeroing of the array that takes all the time for numpy. So unless you need the array initialized, try using empty.
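(A hedged illustration of that advice: numpy.empty only reserves memory, so the contents are whatever happened to be there and must be written before being read.)

import numpy

a = numpy.empty(3)   # memory is allocated but not cleared: contents are garbage
a[:] = 0.0           # pay for initialization only if and when you actually need it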
Answer 1 (score: 4)
Holy CPU cycles, Batman, indeed. But consider something much more fundamental about numpy: sophisticated linear-algebra-based functionality (like random numbers or singular value decomposition). Now, consider these seemingly simple calculations:
In []: A = rand(2560000, 3)
In []: %timeit rand(2560000, 3)
1 loops, best of 3: 296 ms per loop
In []: %timeit u, s, v = svd(A, full_matrices=False)
1 loops, best of 3: 571 ms per loop
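(For anyone who wants to rerun that session outside IPython, a minimal standalone sketch of the same two calls, assuming the usual numpy imports for rand and svd; the timings above were of course obtained with %timeit, not with this script.)

from numpy.random import rand
from numpy.linalg import svd

A = rand(2560000, 3)                    # 2560000 random 3-vectors as rows
u, s, v = svd(A, full_matrices=False)   # thin SVD of the whole block at once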
Believe me, none of the currently available packages will beat that kind of performance by much. So, please describe your real problem, and I'll try to figure out a decent numpy-based solution for it.
Update:
Here is some simple ray/sphere intersection code:
import numpy as np

def mag(X):
    # magnitude (Euclidean norm) of each column vector
    return (X**2).sum(0)**0.5

def closest(R, c):
    # closest point on each ray to the center, and its distance from the center
    P = np.dot(c.T, R) * R
    return P, mag(P - c)

def intersect(R, P, h, r):
    # intersection of the rays with the sphere
    return P - (h * (2*r - h))**0.5 * R

# set up
c, r = np.array([10, 10, 10])[:, None], 2.  # center, radius
n = int(5e5)
R = np.random.rand(3, n)   # some random rays in the first octant
R = R / mag(R)             # normalized to unit length

# find the rays which will intersect the sphere
P, b = closest(R, c)
wi = b <= r

# and for those which will, find the intersection
X = intersect(R[:, wi], P[:, wi], r - b[wi], r)
Apparently we computed it correctly:
In []: allclose(mag(X - c), r)
Out[]: True
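(A brief note on the geometry behind intersect, as I read the code: b is the distance from the sphere center to the closest point P on the ray, and h = r - b. By Pythagoras, the distance from P back along the ray to the near intersection point is

sqrt(r**2 - b**2) = sqrt((r - b)*(r + b)) = sqrt(h*(2*r - h))

which is exactly the term subtracted from P in intersect.)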
And some timings:
In []: %timeit P, b = closest(R, c)
10 loops, best of 3: 93.4 ms per loop
In []: n / 0.0934
Out[]: 5353319  # => more than 5 million detections of possible intersections per second
In []: %timeit X = intersect(R[:, wi], P[:, wi], r - b[wi], r)
10 loops, best of 3: 32.7 ms per loop
In []: X.shape[1] / 0.0327
Out[]: 874037  # => almost 1 million actual intersections per second
These timings were done on a very modest machine. With a modern machine, a significant further speedup is achievable.
Anyway, this was just a short demonstration of how to code with numpy.
Answer 2 (score: 2)
A late answer, but it could be important for other readers.
This problem has also been considered in the kwant project. Indeed, small arrays are not optimized in numpy, and quite frequently small arrays are exactly what you need.
To address this, they created a substitute for small arrays that behaves like, and coexists with, numpy arrays (any operation not implemented in the new data type is handled by numpy).
You should have a look at this project:
https://pypi.python.org/pypi/tinyarray/1.0.5
Its main purpose is to behave nicely for small arrays. Of course, some of the fancier things you can do with numpy are not supported. But numerics seem to be what you need.
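(A minimal usage sketch. It assumes only the array/zeros constructors used in the tests below, plus the interoperability described above, i.e. basic arithmetic on tinyarray objects and falling back to numpy for anything fancier; check the tinyarray documentation before relying on the details.)

import numpy, tinyarray

v = tinyarray.array([0.0, 0.0, 0.0])   # small fixed-size vector
w = v + v                              # basic arithmetic (assumed supported, see docs)
u = numpy.asarray(w)                   # hand it over to numpy for anything fancier
print u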
I made some small tests. I added the numpy import to every script so that the interpreter load time is comparable:
# python_test.py
import numpy
def main():
    for i in xrange(2560000):
        a = [0.0, 0.0, 0.0]
main()

# numpy_test.py
import numpy
def main():
    for i in xrange(2560000):
        a = numpy.array([0.0, 0.0, 0.0])
main()

# numpy_zero_test.py
import numpy
def main():
    for i in xrange(2560000):
        a = numpy.zeros((3,1))
main()

# tiny_test.py
import numpy, tinyarray
def main():
    for i in xrange(2560000):
        a = tinyarray.array([0.0, 0.0, 0.0])
main()

# tiny_zero_test.py
import numpy, tinyarray
def main():
    for i in xrange(2560000):
        a = tinyarray.zeros((3,1))
main()
I ran this:
for f in python numpy numpy_zero tiny tiny_zero ; do
    echo $f
    for i in `seq 5` ; do
        time python ${f}_test.py
    done
done
and got:
python
python ${f}_test.py 0.31s user 0.02s system 99% cpu 0.339 total
python ${f}_test.py 0.29s user 0.03s system 98% cpu 0.328 total
python ${f}_test.py 0.33s user 0.01s system 98% cpu 0.345 total
python ${f}_test.py 0.31s user 0.01s system 98% cpu 0.325 total
python ${f}_test.py 0.32s user 0.00s system 98% cpu 0.326 total
numpy
python ${f}_test.py 2.79s user 0.01s system 99% cpu 2.812 total
python ${f}_test.py 2.80s user 0.02s system 99% cpu 2.832 total
python ${f}_test.py 3.01s user 0.02s system 99% cpu 3.033 total
python ${f}_test.py 2.99s user 0.01s system 99% cpu 3.012 total
python ${f}_test.py 3.20s user 0.01s system 99% cpu 3.221 total
numpy_zero
python ${f}_test.py 1.04s user 0.02s system 99% cpu 1.075 total
python ${f}_test.py 1.08s user 0.02s system 99% cpu 1.106 total
python ${f}_test.py 1.04s user 0.02s system 99% cpu 1.065 total
python ${f}_test.py 1.03s user 0.02s system 99% cpu 1.059 total
python ${f}_test.py 1.05s user 0.01s system 99% cpu 1.064 total
tiny
python ${f}_test.py 0.93s user 0.02s system 99% cpu 0.955 total
python ${f}_test.py 0.98s user 0.01s system 99% cpu 0.993 total
python ${f}_test.py 0.93s user 0.02s system 99% cpu 0.953 total
python ${f}_test.py 0.92s user 0.02s system 99% cpu 0.944 total
python ${f}_test.py 0.96s user 0.01s system 99% cpu 0.978 total
tiny_zero
python ${f}_test.py 0.71s user 0.03s system 99% cpu 0.739 total
python ${f}_test.py 0.68s user 0.02s system 99% cpu 0.711 total
python ${f}_test.py 0.70s user 0.01s system 99% cpu 0.721 total
python ${f}_test.py 0.70s user 0.02s system 99% cpu 0.721 total
python ${f}_test.py 0.67s user 0.01s system 99% cpu 0.687 total
Now, these tests are (as has already been pointed out) not the best tests. Nevertheless, they still show that tinyarray is better suited to small arrays. Another point is that the most common operations should also be faster with tinyarray, so the benefit in real use may be larger than these creation-only numbers suggest.
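(If one wanted to measure operations rather than just creation, a sketch along these lines could be used; it assumes tinyarray supports elementwise addition, which should be checked against its documentation.)

import timeit

numpy_add = timeit.Timer('a + a', setup='import numpy; a = numpy.array([0.0, 0.0, 0.0])')
tiny_add = timeit.Timer('a + a', setup='import tinyarray; a = tinyarray.array([0.0, 0.0, 0.0])')

print 'numpy add:    ', numpy_add.timeit(1000000), 'seconds'
print 'tinyarray add:', tiny_add.timeit(1000000), 'seconds'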
I have never tried it in a fully fledged project, but the kwant project is using it.
Answer 3 (score: 0)
Of course numpy takes more time in this case, because a = np.array([0.0, 0.0, 0.0]) is roughly equivalent to a = [0.0, 0.0, 0.0]; a = np.array(a), i.e. it takes two steps. The advantages of numpy arrays show up in operations on them, not in their creation; that's where the high speed is. Just part of my personal thoughts :).
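(A minimal sketch of that point, using only what the other answers already rely on: skip the intermediate list, or reuse one preallocated array when the loop just needs a scratch 3-vector.)

import numpy as np

a = np.array([0.0, 0.0, 0.0])   # two steps: build a Python list, then convert it
b = np.zeros(3)                 # one step: allocate and zero directly in numpy

buf = np.empty(3)               # or reuse a single buffer inside a hot loop
for i in xrange(2560000):
    buf[:] = 0.0                # reset in place; no new object per iteration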