Alternatives to numpy.gradient

Date: 2016-08-03 20:30:06

Tags: python gradient numerical-methods

I have been struggling to find a robust function to compute the gradient of a 3D array. numpy.gradient supports at most second-order accuracy. Is there another way to compute the gradient more accurately? Thanks.
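For reference, a minimal sketch of the limitation being asked about (the cubic test function and grid size are illustrative choices, not from the question): numpy.gradient uses second-order central differences in the interior, and its `edge_order` parameter only goes up to 2.

```python
import numpy as np

# Uniform grid with spacing h = 0.1; exact derivative of x**3 is 3*x**2.
x = np.linspace(0, 1, 11)
f = x ** 3

df = np.gradient(f, x, edge_order=2)

# The error is O(h**2): roughly h**2 = 0.01 in the interior,
# about twice that at the one-sided boundary stencils.
err = np.max(np.abs(df - 3 * x ** 2))
```

Halving the spacing only reduces this error by a factor of four, which is what motivates the higher-order schemes in the answers below.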

3 Answers:

Answer 0 (score: 2)

I suggest using the symbolic library Theano (http://deeplearning.net/software/theano/). It was designed mainly for neural networks and deep learning, but it fits what you want well.

After installing Theano, here is a simple piece of code that computes differences along a 1-D vector. You can extend it to 3-D yourself.

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.dvector('x')  # dvector is float64, so pass float64 arrays below
    # scan computes (x[i+1] - x[i]) / 2 at each point, i.e. half the forward difference
    J, updates = theano.scan(lambda i, x: (x[i+1] - x[i]) / 2,
                             sequences=T.arange(x.shape[0] - 1),
                             non_sequences=[x])
    f = theano.function([x], J, updates=updates)

    f(np.array([1, 2, 4, 7, 11, 16], dtype='float64'))
    f(np.array([1, 2, 4, 7.12345, 11, 16], dtype='float64'))

Answer 1 (score: 2)

In the end I found this: a fourth-order-accurate gradient. Hopefully numpy will integrate this too...

https://gist.github.com/deeplycloudy/1b9fa46d5290314d9be02a5156b48741

    import numpy as np

    def gradientO4(f, *varargs):
        """Calculate the fourth-order-accurate gradient of an N-dimensional scalar function.

        Uses central differences on the interior and first differences on the
        boundaries to give the same shape.

        Inputs:
          f -- An N-dimensional array giving samples of a scalar function
          varargs -- 0, 1, or N scalars giving the sample distances in each direction

        Outputs:
          N arrays of the same shape as f giving the derivative of f with respect
          to each dimension.
        """
        N = len(f.shape)  # number of dimensions
        n = len(varargs)
        if n == 0:
            dx = [1.0] * N
        elif n == 1:
            dx = [varargs[0]] * N
        elif n == N:
            dx = list(varargs)
        else:
            raise SyntaxError("invalid number of arguments")

        # use central differences on the interior and first differences on endpoints
        outvals = []

        # create slice objects --- initially all are [:, :, ..., :]
        slice0 = [slice(None)] * N
        slice1 = [slice(None)] * N
        slice2 = [slice(None)] * N
        slice3 = [slice(None)] * N
        slice4 = [slice(None)] * N

        # output in a floating-point type, even for integer input
        otype = f.dtype.char
        if otype not in ['f', 'd', 'F', 'D']:
            otype = 'd'

        for axis in range(N):
            # select out appropriate parts for this dimension
            out = np.zeros(f.shape, otype)

            slice0[axis] = slice(2, -2)
            slice1[axis] = slice(None, -4)
            slice2[axis] = slice(1, -3)
            slice3[axis] = slice(3, -1)
            slice4[axis] = slice(4, None)
            # 1D equivalent -- out[2:-2] = (f[:-4] - 8*f[1:-3] + 8*f[3:-1] - f[4:])/12.0
            out[tuple(slice0)] = (f[tuple(slice1)] - 8.0 * f[tuple(slice2)]
                                  + 8.0 * f[tuple(slice3)] - f[tuple(slice4)]) / 12.0

            slice0[axis] = slice(None, 2)
            slice1[axis] = slice(1, 3)
            slice2[axis] = slice(None, 2)
            # 1D equivalent -- out[0:2] = f[1:3] - f[0:2]
            out[tuple(slice0)] = f[tuple(slice1)] - f[tuple(slice2)]

            slice0[axis] = slice(-2, None)
            slice1[axis] = slice(-2, None)
            slice2[axis] = slice(-3, -1)
            # 1D equivalent -- out[-2:] = f[-2:] - f[-3:-1]
            out[tuple(slice0)] = f[tuple(slice1)] - f[tuple(slice2)]

            # divide by the step size
            outvals.append(out / dx[axis])

            # reset the slice objects in this dimension to ":"
            for sl in (slice0, slice1, slice2, slice3, slice4):
                sl[axis] = slice(None)

        if N == 1:
            return outvals[0]
        return outvals

Answer 2 (score: 0)

I found this: findiff, a Python package for finite-difference numerical derivatives and partial differential equations in any number of dimensions.

https://github.com/maroba/findiff

Cheers!