Optimizing pairwise difference computation

Date: 2018-06-10 20:09:18

Tags: python optimization

I have to compute pairwise differences of particle velocities for about 1e6 objects, each containing roughly 1e4 particles. Right now I use itertools.combinations to loop over the particles, but for a single object my code already takes more than 30 minutes. I am wondering what else I can do to speed this up to a feasible level, since parallelization does not seem to gain much in Python. Is Cython an option?
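To put the cost in perspective, a quick back-of-the-envelope pair count (my own sketch, using the sizes quoted above) shows why the per-object loop is so slow:

```python
from math import comb

particles_per_object = 10_000   # ~1e4 particles, as stated above
n_objects = 1_000_000           # ~1e6 objects

# Unordered pairs per object, and the total over all objects
pairs_per_object = comb(particles_per_object, 2)
print(pairs_per_object)               # 49995000 (~5e7 pairs per object)
print(pairs_per_object * n_objects)   # 49995000000000 (~5e13 pair evaluations)
```

At ~5e13 pair evaluations in total, per-pair Python function calls are never going to be fast enough; the work has to move into compiled code.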

Here is my code for a single object:

def pairwisevel(hist, velj, velk, xj, xk):
    # Sign-corrected velocity difference for one particle pair
    vlos = velj - velk
    if (xj - xk) < 0.:
        vlos = -vlos
    hist.add_from_value(vlos)


for i in itertools.combinations(np.arange(0,int(particles_per_group[0]),1),2):
    pairwisevel(hist, pvel[i[0]], pvel[i[1]],\
                pcoords[i[0]], pcoords[i[1]])

1 Answer:

Answer 0 (score: 1)

I hope I understood your problem correctly. In this example I compute the histogram for a single particle object. But if you want to compare all 1e6 objects (1e4 * 1e4 * 1e6 = 1e14 pair evaluations), this will still take days. In this example I use Numba to do the task.
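For comparison, the same per-object histogram can also be built with chunked NumPy broadcasting. This is my own sketch (the function name and `chunk` parameter are made up, not part of the original answer); it follows the sign convention from the question and counts all ordered pairs, including i == j:

```python
import numpy as np

def pairwise_hist_numpy(pvel, pcoords, bins, chunk=512):
    # Histogram of sign-corrected velocity differences over all ordered pairs,
    # processed in row chunks to bound peak memory (chunk * n floats at a time).
    n = pvel.shape[0]
    hist = np.zeros(len(bins) + 1, dtype=np.uint64)
    for start in range(0, n, chunk):
        sl = slice(start, min(start + chunk, n))
        dv = pvel[sl, None] - pvel[None, :]        # velocity differences
        dx = pcoords[sl, None] - pcoords[None, :]  # coordinate differences
        vlos = np.where(dx < 0., -dv, dv)          # sign convention from the question
        idx = np.digitize(vlos.ravel(), bins)      # bin indices in [0, len(bins)]
        hist += np.bincount(idx, minlength=len(bins) + 1).astype(np.uint64)
    return hist
```

This is typically slower than the Numba kernels below but needs no compilation step, and it is a convenient reference to validate their output on small inputs.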

Code (timing measurements are included at the end of the script):

import numpy as np
import numba as nb
import time

#From Numba source
#Copyright (c) 2012, Anaconda, Inc.
#All rights reserved.

@nb.njit(fastmath=True)
def digitize(x, bins, right=False):
    # bins are monotonically-increasing
    n = len(bins)
    lo = 0
    hi = n

    if right:
        if np.isnan(x):
            # Find the first nan (i.e. the last from the end of bins,
            # since there shouldn't be many of them in practice)
            for i in range(n, 0, -1):
                if not np.isnan(bins[i - 1]):
                    return i
            return 0
        while hi > lo:
            mid = (lo + hi) >> 1
            if bins[mid] < x:
                # mid is too low => narrow to upper bins
                lo = mid + 1
            else:
                # mid is too high, or is a NaN => narrow to lower bins
                hi = mid
    else:
        if np.isnan(x):
            # NaNs end up in the last bin
            return n
        while hi > lo:
            mid = (lo + hi) >> 1
            if bins[mid] <= x:
                # mid is too low => narrow to upper bins
                lo = mid + 1
            else:
                # mid is too high, or is a NaN => narrow to lower bins
                hi = mid

    return lo

#Variant_1
#Note: concurrent increments of the shared histogram from different threads
#can race; for guaranteed-exact counts use one histogram per thread and sum them.
@nb.njit(fastmath=True,parallel=True)
def bincount_comb_1(pvel,pcoords,bins):
  vlos_binned=np.zeros(bins.shape[0]+1,dtype=np.uint64)
  for i in nb.prange(pvel.shape[0]):
    for j in range(pvel.shape[0]):
      if( (pcoords[i] - pcoords[j]) < 0.):
        vlos = -(pvel[i] - pvel[j])   #sign convention from the question
      else:
        vlos = (pvel[i] - pvel[j])

      dig_vlos=digitize(vlos, bins, right=False)
      vlos_binned[dig_vlos]+=1
  return vlos_binned

#Variant_2
#Is this also working?
@nb.njit(fastmath=True,parallel=True)
def bincount_comb_2(pvel,pcoords,bins):
  vlos_binned=np.zeros(bins.shape[0]+1,dtype=np.uint64)
  for i in nb.prange(pvel.shape[0]):
    for j in range(pvel.shape[0]):
      #only particles which fulfill this condition are counted?
      if( (pcoords[i] - pcoords[j]) < 0.):
        vlos = (pvel[i] - pvel[j])
        dig_vlos=digitize(vlos, bins, right=False)
        vlos_binned[dig_vlos]+=1
  return vlos_binned

#Variant_3
#Only counting once
@nb.njit(fastmath=True,parallel=True)
def bincount_comb_3(pvel,pcoords,bins):
  vlos_binned=np.zeros(bins.shape[0]+1,dtype=np.uint64)
  for i in nb.prange(pvel.shape[0]):
    for j in range(i,pvel.shape[0]):
      #only particles, where this condition is met are counted?
      if( (pcoords[i] - pcoords[j]) < 0.):
        vlos = (pvel[i] - pvel[j])
        dig_vlos=digitize(vlos, bins, right=False)
        vlos_binned[dig_vlos]+=1
  return vlos_binned


#Create some data to test
bins=np.arange(2,32)
pvel=np.random.rand(10_000)*35
pcoords=np.random.rand(10_000)*35

#first call has compilation overhead, we don't measure this
res_1=bincount_comb_1(pvel,pcoords,bins)
res_2=bincount_comb_2(pvel,pcoords,bins)
res_3=bincount_comb_3(pvel,pcoords,bins)

t1=time.time()
res=bincount_comb_1(pvel,pcoords,bins)
print(time.time()-t1)
t1=time.time()
res=bincount_comb_2(pvel,pcoords,bins)
print(time.time()-t1)
t1=time.time()
res=bincount_comb_3(pvel,pcoords,bins)
print(time.time()-t1)
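Regarding the "is this also working?" comments above: the three variants do not count the same pair sets. A small pure-Python cross-check (my own addition, no Numba required; it follows the question's sign convention for variant 1) makes the difference visible:

```python
import numpy as np

def count_schemes(pvel, pcoords, bins):
    # Re-implements the three variants' counting logic with plain loops.
    h1 = np.zeros(len(bins) + 1, dtype=np.uint64)  # variant 1: all ordered pairs, sign-corrected
    h2 = np.zeros(len(bins) + 1, dtype=np.uint64)  # variant 2: only orderings with x_i < x_j
    h3 = np.zeros(len(bins) + 1, dtype=np.uint64)  # variant 3: variant 2 restricted to j >= i
    n = len(pvel)
    for i in range(n):
        for j in range(n):
            dv = pvel[i] - pvel[j]
            dx = pcoords[i] - pcoords[j]
            h1[np.digitize(-dv if dx < 0. else dv, bins)] += 1
            if dx < 0.:
                h2[np.digitize(dv, bins)] += 1
                if j >= i:
                    h3[np.digitize(dv, bins)] += 1
    return h1, h2, h3
```

With distinct coordinates, h1 counts all n² ordered pairs and h2 counts each unordered pair exactly once (n·(n−1)/2 entries), but h3 only keeps pairs whose lower-index particle also has the smaller coordinate. In the extreme case of coordinates sorted in descending order, variant 3 counts nothing at all, so variants 2 and 3 answer different questions.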