I'm trying to analyze some data, and to do so I need to compute a quantity that involves a double sum. The Python code looks like this:
import numpy as np

tmax = 1000
array = np.random.rand(tmax)
tstart = 500
meanA = np.mean(array[tstart:])
quantity = np.zeros(tmax - tstart)
for f in range(1, tmax - tstart, 1):
    count = 0
    integrand = 0
    for ff in range(tstart + 1, tmax - f):
        count += 1
        dAt = array[ff] - meanA
        dAtt = array[ff:ff + f + 1] - meanA
        integrand += np.sum(dAt * dAtt)
    if count != 0:
        integrand /= f * count
    quantity[f] = integrand
This takes about 1.5 s to run, roughly 10x longer than the same computation takes in MATLAB:
tic;
tmax = 1000;
tstart = 500;
array = rand(1,tmax);
meanA = mean(array(tstart:tmax));
quantity = zeros(1,tmax);
for f = 1:tmax-tstart
    integrand = 0;
    count = 0;
    for ff = tstart:tmax-f
        count = count + 1;
        dAt = array(ff) - meanA;
        dAtt = array(ff:ff+f) - meanA;
        integrand = integrand + sum(dAt*dAtt);
    end
    integrand = integrand/(f*count);
    quantity(f) = integrand;
end
toc
Output:
>> speedTest
Elapsed time is 0.096789 seconds.
Why is my Python script so slow, and how can I make it run as fast as the MATLAB script? (And yes, for various other reasons I have to do this in Python.)
Note that the real data corresponds to arrays of >10,000 elements, so as the number of operations grows in proportion to the number of elements, the difference in runtime becomes very large.
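As an aside on the algorithm itself: in the inner loop, `np.sum(dAt * dAtt)` is just `dA[ff]` times the window sum `sum(dA[ff:ff+f+1])`, so the window sums can be read off a prefix-sum array and the inner `ff` loop vectorized away. A minimal sketch of that idea (a rewritten variant of the code above, assuming the double sum shown there is the quantity actually wanted):

```python
import numpy as np

tmax = 1000
array = np.random.rand(tmax)
tstart = 500
meanA = np.mean(array[tstart:])

dA = array - meanA
# prefix sums: S[k] = dA[0] + ... + dA[k-1], so sum(dA[i:j]) == S[j] - S[i]
S = np.concatenate(([0.0], np.cumsum(dA)))

quantity = np.zeros(tmax - tstart)
for f in range(1, tmax - tstart):
    ff = np.arange(tstart + 1, tmax - f)
    if ff.size:
        # sum over ff of dA[ff] * sum(dA[ff:ff+f+1]), without the inner Python loop
        integrand = np.sum(dA[ff] * (S[ff + f + 1] - S[ff]))
        quantity[f] = integrand / (f * ff.size)
```

This replaces the O(n) inner loop with a few array operations per `f`, which is usually where the MATLAB/Python gap disappears.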
Edit:
I tried the same thing using plain lists instead of numpy (except for the random-number generation):
import numpy as np

tmax = 1000
array = np.random.rand(tmax)
array = list(array)
tstart = 500
meanA = sum(array[tstart:]) / len(array[tstart:])
quantity = [0] * (tmax - tstart)
for f in range(1, tmax - tstart, 1):
    count = 0
    integrand = 0
    for ff in range(tstart + 1, tmax - f):
        count += 1
        dAt = array[ff] - meanA
        # plain lists don't broadcast, so subtract element-wise
        dAtt = [a - meanA for a in array[ff:ff + f + 1]]
        integrand += sum(dAt * i for i in dAtt)
    if count != 0:
        integrand /= f * count
    quantity[f] = integrand
This results in:
$ time python3 speedAutoCorr2.py
real 0m6.510s
user 0m6.731s
sys 0m0.123s
which is even worse than the numpy case.