Is there a better way to make a 3D density function?
import numpy as np

def make_spot_3d(bright, spread, x0, y0, z0):
    # Create x, y and z indices
    x = np.linspace(-50, 50, 200)
    y = np.linspace(-50, 50, 200)
    z = np.linspace(-50, 50, 200)
    X, Y, Z = np.meshgrid(x, y, z)
    Intensity = np.uint16(bright*np.exp(-((X-x0)/spread)**2
                                        -((Y-y0)/spread)**2
                                        -((Z-z0)/spread)**2))
    return Intensity
This function generates a 3D numpy array that can be plotted with mayavi. However, when the function is used to generate a cluster of spots (~100), like so:
Spots = np.asarray([make_spot_3d(100,2, *loc) for loc in locations])
cluster = np.sum(Spots, axis=0)
it yields the expected cluster, but the execution time is around 1 minute (i5 CPU); I bet this could be faster.
Answer 0 (score: 2)
One obvious improvement would be to use broadcasting to evaluate your intensity function over a 'sparse' grid rather than a full meshgrid, e.g.:
X, Y, Z = np.meshgrid(x, y, z, sparse=True)
This reduces the runtime by about a factor of 4 on my machine:
%timeit make_spot_3d(1., 1., 0, 0, 0)
1 loops, best of 3: 1.56 s per loop
%timeit make_spot_3d_ogrid(1., 1., 0, 0, 0)
1 loops, best of 3: 359 ms per loop
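For reference, a sketch of what the sparse variant timed above might look like (the name make_spot_3d_ogrid is taken from the timing output; the only change from the original function is the sparse=True flag, and np.ogrid would be an equivalent way to build such open grids):

```python
import numpy as np

def make_spot_3d_ogrid(bright, spread, x0, y0, z0):
    # sparse=True returns three "open" 1D-shaped grids that broadcast
    # against each other, instead of materializing three full 200**3 arrays
    x = np.linspace(-50, 50, 200)
    y = np.linspace(-50, 50, 200)
    z = np.linspace(-50, 50, 200)
    X, Y, Z = np.meshgrid(x, y, z, sparse=True)
    Intensity = np.uint16(bright * np.exp(-((X - x0) / spread) ** 2
                                          - ((Y - y0) / spread) ** 2
                                          - ((Z - z0) / spread) ** 2))
    return Intensity
```

The result broadcasts out to the same (200, 200, 200) array as before; only the three coordinate grids stay small.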
You could eliminate the overhead involved in the list comprehension by vectorizing the calculation over the positions, spreads and brightnesses, e.g.:
def make_spots(bright, spread, x0, y0, z0):
    # Create x, y and z indices
    x = np.linspace(-50, 50, 200)
    y = np.linspace(-50, 50, 200)
    z = np.linspace(-50, 50, 200)
    # this will broadcast out to an (nblobs, ny, nx, nz) array
    dx = x[None, None, :, None] - x0[:, None, None, None]
    dy = y[None, :, None, None] - y0[:, None, None, None]
    dz = z[None, None, None, :] - z0[:, None, None, None]
    spread = spread[:, None, None, None]
    bright = bright[:, None, None, None]
    # we can save time by performing the exponentiation over 2D arrays
    # before broadcasting out to 4D, since exp(a + b) == exp(a) * exp(b)
    s2 = spread * spread
    a = np.exp(-(dx * dx) / s2)
    b = np.exp(-(dy * dy) / s2)
    c = np.exp(-(dz * dz) / s2)
    intensity = bright * a * b * c
    return intensity.astype(np.uint16)
where bright, spread, x0, y0 and z0 are 1D vectors. This will generate an (nblobs, ny, nx, nz) array, which you can then sum over the first axis. Depending on how many blobs you are generating, and how large the grid you are evaluating them over is, creating this intermediate array can get quite expensive in terms of memory.
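To put a rough number on that memory cost (a back-of-envelope sketch, assuming float64 intermediates before the final uint16 cast):

```python
# each float64 spot volume on a 200**3 grid takes 200**3 * 8 bytes = 64 MB,
# so ~100 blobs means roughly 6.4 GB for the (nblobs, ny, nx, nz) intermediate
nblobs, n = 100, 200
bytes_per_blob = n ** 3 * 8               # float64 is 8 bytes per element
total_gb = nblobs * bytes_per_blob / 1e9
print(f"{total_gb:.1f} GB")               # 6.4 GB
```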
Another option would be to initialize a single (ny, nx, nz) output array and compute the sum in place:
def sum_spots_inplace(bright, spread, x0, y0, z0):
    # Create x, y and z indices
    x = np.linspace(-50, 50, 200)
    y = np.linspace(-50, 50, 200)
    z = np.linspace(-50, 50, 200)
    dx = x[None, None, :, None] - x0[:, None, None, None]
    dy = y[None, :, None, None] - y0[:, None, None, None]
    dz = z[None, None, None, :] - z0[:, None, None, None]
    spread = spread[:, None, None, None]
    bright = bright[:, None, None, None]
    s2 = spread * spread
    a = np.exp(-(dx * dx) / s2)
    b = np.exp(-(dy * dy) / s2)
    c = np.exp(-(dz * dz) / s2)
    # accumulate in floating point and cast once at the end -- adding float
    # arrays in place into a uint16 array would raise a casting error
    out = np.zeros((200, 200, 200))
    for ii in range(bright.shape[0]):
        out += bright[ii] * a[ii] * b[ii] * c[ii]
    return out.astype(np.uint16)
This will require much less memory, but the potential downside is that it needs a loop in Python.
To give you an idea of the relative performance:
def sum_spots_listcomp(bright, spread, x0, y0, z0):
    return np.sum([make_spot_3d(bright[ii], spread[ii], x0[ii], y0[ii], z0[ii])
                   for ii in range(len(bright))], axis=0)

def sum_spots_vec(bright, spread, x0, y0, z0):
    return make_spots(bright, spread, x0, y0, z0).sum(0)
# some fake data
bright = np.random.rand(10) * 100
spread = np.random.rand(10) * 100
x0 = (np.random.rand(10) - 0.5) * 50
y0 = (np.random.rand(10) - 0.5) * 50
z0 = (np.random.rand(10) - 0.5) * 50
%timeit sum_spots_listcomp(bright, spread, x0, y0, z0)
# 1 loops, best of 3: 16.6 s per loop
%timeit sum_spots_vec(bright, spread, x0, y0, z0)
# 1 loops, best of 3: 1.03 s per loop
%timeit sum_spots_inplace(bright, spread, x0, y0, z0)
# 1 loops, best of 3: 330 ms per loop
Answer 1 (score: 1)
Since you have an i5 processor and the spots are independent of each other, it would be nice to implement multithreading. You don't necessarily need multiple processes, since many NumPy operations release the GIL. The additional code is quite simple:
from multiprocessing.dummy import Pool

if __name__ == '__main__':
    wrap = lambda pos: make_spot_3d(100, 2, *pos)
    cluster = sum(Pool().imap_unordered(wrap, positions))
Update:
After some testing on my PC, I have to admit that the code above was far too naive and inefficient. On 8 cores the speedup is only about 1.5x relative to single-core performance.
I still think multithreading is a good idea, but the success depends heavily on the implementation.
Answer 2 (score: 0)
So, every operation you perform is done 8 million (= 200*200*200) times. To start with, you could cut that to 1 million by computing one octant and mirroring it (if the sphere sits exactly at the center of the grid). Mirroring isn't free, but it is still far cheaper than exp.
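A sketch of the octant-mirroring idea for a spot at the origin (spot_centered_mirror is a hypothetical helper, not from the original post; it relies on linspace(-50, 50, n) with even n pairing each grid point with its negative, and returns floats, which can be cast with np.uint16 as in the question):

```python
import numpy as np

def spot_centered_mirror(bright, spread, n=200):
    # Evaluate the expensive exp() only on the octant where x, y, z >= 0,
    # then reflect it into the other seven octants one axis at a time.
    x = np.linspace(-50, 50, n)
    h = n // 2
    pos = x[h:]                                   # positive half-axis
    X, Y, Z = np.meshgrid(pos, pos, pos, sparse=True)
    octant = bright * np.exp(-(X**2 + Y**2 + Z**2) / spread**2)
    out = np.empty((n, n, n))
    out[h:, h:, h:] = octant
    out[h:, h:, :h] = out[h:, h:, n-1:h-1:-1]     # reflect along z
    out[h:, :h, :] = out[h:, n-1:h-1:-1, :]       # reflect along y
    out[:h, :, :] = out[n-1:h-1:-1, :, :]         # reflect along x
    return out
```

Only n**3 / 8 calls to exp remain; the three reflections are plain memory copies.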
Also, you can most likely stop computing once the intensity values have dropped to 0. With a little logarithm magic, you can work out a region of interest that may be much smaller than the full 200*200*200 grid.
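One way to make that "logarithm magic" concrete: with a uint16 output, anything below 1 truncates to zero, so solving bright*exp(-(r/spread)**2) >= 1 gives a cutoff radius r_max = spread*sqrt(log(bright)); only the sub-box of the grid within r_max of the spot center needs evaluating. A sketch under those assumptions (make_spot_roi is a hypothetical helper; it assumes bright > 1 and uses indexing='ij' so axes map directly to x, y, z):

```python
import numpy as np

def make_spot_roi(bright, spread, x0, y0, z0, n=200):
    # Evaluate exp() only where the uint16-truncated intensity can be
    # nonzero: bright*exp(-(r/spread)**2) >= 1  <=>  r <= spread*sqrt(log(bright))
    x = np.linspace(-50, 50, n)
    r_max = spread * np.sqrt(np.log(bright))     # requires bright > 1
    out = np.zeros((n, n, n), dtype=np.uint16)
    ix = np.searchsorted(x, [x0 - r_max, x0 + r_max])
    iy = np.searchsorted(x, [y0 - r_max, y0 + r_max])
    iz = np.searchsorted(x, [z0 - r_max, z0 + r_max])
    xs, ys, zs = x[ix[0]:ix[1]], x[iy[0]:iy[1]], x[iz[0]:iz[1]]
    X, Y, Z = np.meshgrid(xs, ys, zs, sparse=True, indexing='ij')
    out[ix[0]:ix[1], iy[0]:iy[1], iz[0]:iz[1]] = np.uint16(
        bright * np.exp(-((X - x0)**2 + (Y - y0)**2 + (Z - z0)**2) / spread**2))
    return out
```

For bright=100 and spread=2, r_max is about 4.3, so the box spans only a few thousand grid points instead of 8 million exp evaluations.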