Can I speed up this loop with numpy?

Time: 2014-01-27 20:54:37

Tags: python numpy

Good evening,

I'm trying to speed up the loop in this code. I've read the numpy docs, but to no avail. np.accumulate looks like almost what I need, but not quite.

What can I do to speed up this loop?

import numpy as np

N       = 1000
AR_part = np.random.randn(N+1)
s2      = np.ndarray(N+1)
s2[0]   = 1.0

beta = 1.3

old_s2  = s2[0]
for t in range( 1, N+1 ):               
    s2_t    = AR_part[ t-1 ] + beta * old_s2
    s2[t]   = s2_t        
    old_s2  = s2_t

In response to Warren, I updated my code:

import numpy as np
from scipy.signal import lfilter, lfiltic

N       = 1000
AR_part = np.random.randn(N+1)

beta = 1.3

def method1( AR_part):
    s2      = np.empty_like(AR_part)
    s2[0]   = 1.0
    old_s2  = s2[0]
    for t in range( 1, N+1 ):               
        s2_t    = AR_part[ t-1 ] + beta * old_s2
        s2[t]   = s2_t        
        old_s2  = s2_t
    return s2

def method2( AR_part):
    y = np.empty_like(AR_part)
    b = np.array([0, 1])
    a = np.array([1, -beta])

    # Initial condition for the linear filter.
    zi = lfiltic(b, a, [1.0], AR_part[:1])

    y[:1] = 1.0
    y[1:], zo = lfilter(b, a, AR_part[1:], zi=zi)

    return y    

s2 = method1( AR_part )
y = method2( AR_part )
np.alltrue( s2==y )
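As a sanity check, a more tolerant comparison can be used as well; this is a minimal sketch assuming the two methods might differ by floating-point round-off even when both are correct:

# np.allclose tolerates tiny round-off differences, unlike exact equality.
print(np.allclose(s2, y))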

Timing code:

%timeit method1( AR_part )
100 loops, best of 3: 1.63 ms per loop
%timeit method2( AR_part )
10000 loops, best of 3: 129 us per loop

This shows that Warren's method is over 10 times faster! Very impressive!

3 Answers:

Answer 0 (Score: 5):

Your recurrence relation is linear, so it can be viewed as a linear filter. You can use scipy.signal.lfilter to compute s2. I recently answered a similar question here: python recursive vectorization with timeseries

Here is a script that shows how to use lfilter to compute your series:

import numpy as np
from scipy.signal import lfilter, lfiltic


np.random.seed(123)

N       = 4
AR_part = np.random.randn(N+1)
s2      = np.ndarray(N+1)
s2[0]   = 1.0

beta = 1.3

old_s2  = s2[0]
for t in range( 1, N+1 ):               
    s2_t    = AR_part[ t-1 ] + beta * old_s2
    s2[t]   = s2_t        
    old_s2  = s2_t


# Compute the result using scipy.signal.lfilter.

# Transfer function coefficients.
# `b` is the numerator, `a` is the denominator.
b = np.array([0, 1])
a = np.array([1, -beta])

# Initial condition for the linear filter.
zi = lfiltic(b, a, s2[:1], AR_part[:1])

# Apply lfilter to AR_part.
y = np.empty_like(AR_part)
y[:1] = s2[:1]
y[1:], zo = lfilter(b, a, AR_part[1:], zi=zi)

# Compare the results
print "s2 =", s2
print "y  =", y

Output:

s2 = [ 1.          0.2143694   1.27602566  1.94181186  1.0180607 ]
y  = [ 1.          0.2143694   1.27602566  1.94181186  1.0180607 ]
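To see why b = [0, 1] and a = [1, -beta] reproduce the recurrence, here is a minimal sketch with made-up input values, assuming zero initial filter state:

import numpy as np
from scipy.signal import lfilter

beta = 1.3
x = np.array([2.0, 5.0, -1.0])   # arbitrary example input

# lfilter implements  a[0]*y[n] = b[0]*x[n] + b[1]*x[n-1] - a[1]*y[n-1],
# which with b = [0, 1] and a = [1, -beta] reduces to
#     y[n] = x[n-1] + beta * y[n-1]
# i.e. the same shape as s2[t] = AR_part[t-1] + beta * s2[t-1].
y = lfilter([0, 1], [1, -beta], x)
print(y)   # [0.   2.   7.6]  -- 0, then x[0], then x[1] + beta*y[1]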

Answer 1 (Score: 3):

I'm not sure there is much you can do to speed up the loop... The only way I can see is to avoid the recursion, i.e. compute s2[t] directly for each t. But that is expensive too...

You have

s2[t] = AR_part[t-1] + beta * s2[t-1]
= AR_part[t-1] + beta * (AR_part[t-2] + beta * s2[t-2])
= AR_part[t-1] + beta * AR_part[t-2] + beta^2 * s2[t-2]
= np.dot( AR[:t-1], beta_powers[-(t-1):]  )

where beta_powers contains [beta^1000, beta^999, ..., 1.0]. You can create beta_powers with:

np.power(beta, np.arange(1000))[::-1]
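For illustration, a minimal sketch of that direct computation (the helper name s2_direct and the explicit beta**t * s2[0] initial-condition term are additions here, not part of the answer as written):

import numpy as np

def s2_direct(t, AR_part, beta, s2_0=1.0):
    # Unrolled recurrence:
    #   s2[t] = sum_{k=0}^{t-1} beta**k * AR_part[t-1-k] + beta**t * s2[0]
    powers = beta ** np.arange(t - 1, -1, -1)   # [beta**(t-1), ..., beta**0]
    return np.dot(AR_part[:t], powers) + beta ** t * s2_0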

But I don't see a way to compute these things faster than your loop does...

But you can rewrite it as:

for t in range(N):
    s2[t+1] = AR_part[t] + beta * s2[t]

Answer 2 (Score: 0):

I agree with GHL that you won't get much more performance (although if N is really large and you only need to compute certain parts of the vector s2, definitely use his method), but here is a different way to do what you are looking at:

import numpy as np

N       = 1000
AR_part = np.random.randn(N+1)
beta = 1.3


def seq_gen(beta, constants, first_element = 1.0):
    next_element = first_element
    yield next_element
    for j in constants:
        next_element = j + beta * next_element
        yield next_element

s2 = np.array([j for j in seq_gen(beta, AR_part, 1.0)])
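A quick sanity check for this version, sketched by recomputing the original recurrence inline rather than reusing method1 from the question:

# Reference result from the plain loop; np.allclose tolerates round-off.
ref = np.empty_like(AR_part)
ref[0] = 1.0
for t in range(1, N + 1):
    ref[t] = AR_part[t - 1] + beta * ref[t - 1]
print(np.allclose(s2, ref))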