Efficient sum of Gaussians in 3D with NumPy using large arrays

Posted: 2018-09-06 16:21:19

Tags: python numpy scipy gaussian

I have an M x 3 array of 3D coordinates, coords (M ~ 1000-10000), and I want to compute the sum of Gaussians centered at these coordinates over a meshgrid 3D array. The meshgrid 3D array is typically around 64 x 64 x 64, but sometimes larger than 256 x 256 x 256, and can be larger still. I followed this question to get started, converting the meshgrid array into an N x 3 array of coordinates xyz, where N is 64^3 or 256^3, and so on. However, for the large arrays, vectorizing the entire calculation takes too much memory (understandably, since it can approach 1e11 elements and consume a terabyte of RAM), so I broke it into a loop over the M coordinates. But this is too slow.

I'm wondering if there is any way to speed this up without overloading memory. By converting the meshgrid to xyz, I feel like I've lost any advantage of the grid being equally spaced, and that somehow, maybe with scipy.ndimage, I should be able to take advantage of the even spacing for a speedup.
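That intuition can be sketched with scipy.ndimage.gaussian_filter: deposit each point into its nearest voxel, then convolve the whole grid once, so the cost scales with the grid size rather than with M x N. This is a sketch added for illustration, not from the original post; it approximates positions to the nearest grid step (noticeable when sigma is comparable to the spacing), and gaussian_filter uses a unit-sum kernel, so the absolute scale differs from the question's 1/sqrt(2*pi*sigma**2) amplitude by a constant factor.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

side = 100.0
n = 64
sigma = 1.0
spacing = side / (n - 1)  # step of np.linspace(-side/2, side/2, n)

rng = np.random.default_rng(0)
coords = rng.random((1000, 3)) * side - side / 2

# Deposit each point into its nearest voxel (unit mass per point).
idx = np.clip(np.rint((coords + side / 2) / spacing).astype(int), 0, n - 1)
grid = np.zeros((n, n, n))
np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)

# One separable convolution over the grid; sigma is given in voxels.
image = gaussian_filter(grid, sigma=sigma / spacing)
```

Because the kernel is unit-sum, image.sum() stays close to the number of points; rescale by a constant if you need values matching the question's kernel amplitude.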

Here's what I started with:

import numpy as np
from scipy import spatial

#create meshgrid
side = 100.
n = 64 #could be 256 or larger
x_ = np.linspace(-side/2,side/2,n)
x,y,z = np.meshgrid(x_,x_,x_,indexing='ij')

#convert meshgrid to list of coordinates
xyz = np.column_stack((x.ravel(),y.ravel(),z.ravel()))

#create some coordinates
coords = np.random.random(size=(1000,3))*side - side/2

def sumofgauss(coords,xyz,sigma):
    """Simple isotropic gaussian sum at coordinate locations."""
    n = int(round(xyz.shape[0]**(1/3.))) #get n samples for reshaping to 3D later
    #this version overloads memory
    #dist = spatial.distance.cdist(coords, xyz)
    #dist *= dist
    #values = 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-dist/(2*sigma**2))
    #values = np.sum(values,axis=0)
    #run cdist in a loop over coords to avoid overloading memory
    values = np.zeros((xyz.shape[0]))
    for i in range(coords.shape[0]):
        dist = spatial.distance.cdist(coords[None,i], xyz)
        dist *= dist
        values += 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-dist[0]/(2*sigma**2))
    return values.reshape(n,n,n)

image = sumofgauss(coords,xyz,1.0)

import matplotlib.pyplot as plt
plt.imshow(image[n//2]) #show a slice (integer division so the index works in Python 3)
plt.show()

M = 1000, N = 64 (~5 seconds): (figure: Sum of Gaussians in 3D Slice; N = 64)

M = 1000, N = 256 (~10 minutes): (figure: Sum of Gaussians in 3D Slice; N = 256)
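A middle ground (a sketch added for illustration, not part of the original post) between the full cdist matrix and the per-point loop is to chunk over grid points: each cdist call is then vectorized over all M coords at once while memory stays bounded by an M x chunk block. The function name and chunk size are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def sumofgauss_chunked(coords, xyz, sigma, chunk=100000):
    """Same result as the per-coordinate loop, but computed one
    vectorized cdist block of shape (M, chunk) at a time."""
    n = int(round(xyz.shape[0]**(1/3.)))  #grid side length, for reshaping
    amp = 1./np.sqrt(2*np.pi*sigma**2)
    values = np.zeros(xyz.shape[0])
    for start in range(0, xyz.shape[0], chunk):
        d = cdist(coords, xyz[start:start+chunk])      #(M, chunk) distances
        values[start:start+chunk] = amp * np.exp(-d**2/(2*sigma**2)).sum(axis=0)
    return values.reshape(n, n, n)
```

This keeps peak memory at roughly M * chunk * 8 bytes, so the chunk size can be tuned to the available RAM.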

1 answer:

Answer 0 (score: 1)

Considering that many of your distance calculations will give zero weight after the exponential, you can probably drop a lot of your distances. Doing the distance calculation with a KDTree while discarding distances greater than a threshold is usually faster:

import numpy as np
from scipy.spatial import cKDTree # so we can get a `coo_matrix` output

def gaussgrid(coords, sigma = 1, n = 64, side = 100, eps = None):
    x_ = np.linspace(-side/2,side/2,n)
    x,y,z = np.meshgrid(x_,x_,x_,indexing='ij')
    xyz = np.column_stack((x.ravel(),y.ravel(),z.ravel()))
    if eps is None:
        eps = np.finfo('float64').eps
    #drop pairs whose Gaussian weight would fall below eps:
    #exp(-d**2/(2*sigma**2)) >= eps  <=>  d <= sqrt(-log(eps)*2*sigma**2)
    thr = np.sqrt(-np.log(eps) * 2 * sigma**2)
    data_tree = cKDTree(coords)
    discr = 1000 # chunk size; you can tweak this to get best results on your system
    values = np.zeros(n**3)
    for i in range(n**3//discr + 1):
        slc = slice(i * discr, i * discr + discr)
        if xyz[slc].shape[0] == 0:
            continue #skip the empty final chunk when discr divides n**3
        grid_tree = cKDTree(xyz[slc])
        dists = grid_tree.sparse_distance_matrix(data_tree, thr, output_type = 'coo_matrix')
        #sparse_distance_matrix returns Euclidean distances, so square them
        #here to match the question's exp(-d**2/(2*sigma**2)) kernel
        dists.data = 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-dists.data**2/(2*sigma**2))
        values[slc] = np.asarray(dists.sum(1)).ravel()
    return values.reshape(n,n,n)

Now, even if you keep eps = None it will be a bit faster, since you're still only returning about 10% of the distances, but with eps = 1e-6 or so you should get a big speedup. On my system:

%timeit out = sumofgauss(coords, xyz, 1.0)
1 loop, best of 3: 23.7 s per loop

%timeit out = gaussgrid(coords)
1 loop, best of 3: 2.12 s per loop

%timeit out = gaussgrid(coords, eps = 1e-6)
1 loop, best of 3: 382 ms per loop
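As a sanity check (added here, not part of the original answer), the thresholded result can be compared against the brute-force sum on a small grid. This self-contained version squares the Euclidean distances returned by sparse_distance_matrix so the kernel matches the question's exp(-d**2/(2*sigma**2)), and takes the square root when building the cutoff radius; the name gaussgrid_small is illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import cdist

def gaussgrid_small(coords, sigma=1.0, n=16, side=10.0, eps=1e-6, discr=1000):
    """KDTree-thresholded Gaussian sum on an n**3 grid."""
    x_ = np.linspace(-side/2, side/2, n)
    x, y, z = np.meshgrid(x_, x_, x_, indexing='ij')
    xyz = np.column_stack((x.ravel(), y.ravel(), z.ravel()))
    # weight < eps beyond this radius, so those pairs are never computed
    thr = np.sqrt(-np.log(eps) * 2 * sigma**2)
    data_tree = cKDTree(coords)
    values = np.zeros(n**3)
    for start in range(0, n**3, discr):
        slc = slice(start, start + discr)
        grid_tree = cKDTree(xyz[slc])
        dists = grid_tree.sparse_distance_matrix(data_tree, thr,
                                                 output_type='coo_matrix')
        dists.data = 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-dists.data**2/(2*sigma**2))
        values[slc] = np.asarray(dists.sum(1)).ravel()
    return values.reshape(n, n, n)

rng = np.random.default_rng(1)
coords = rng.random((100, 3)) * 10.0 - 5.0
approx = gaussgrid_small(coords)

# brute-force reference on the same grid
x_ = np.linspace(-5.0, 5.0, 16)
x, y, z = np.meshgrid(x_, x_, x_, indexing='ij')
xyz = np.column_stack((x.ravel(), y.ravel(), z.ravel()))
d = cdist(coords, xyz)
exact = (1./np.sqrt(2*np.pi) * np.exp(-d**2/2.0)).sum(axis=0).reshape(16, 16, 16)
```

With eps = 1e-6 the dropped pairs each contribute less than eps times the peak amplitude, so the two grids agree to well below 1e-3 everywhere.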