Why use Cython

Time: 2016-04-06 10:03:17

Tags: python c numpy cython

I ran into a problem when using Cython to assign intermediate results to an array. Here I declare test_array, sample_size, and weight_array, and use a for loop to save each weighted result into res_array. Both test_array and weight_array are defined as C-contiguous arrays in Cython. The test.pyx and setup.py files look like this:

# test.pyx
import numpy as np
cimport numpy as np
import random
cimport cython
from cython cimport boundscheck, wraparound


@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
@cython.cdivision(True)
@cython.profile(True)
def cython_sample(int res_size, int sample_size, double[::1] all_data, double[::1] weight_array):
    # using c-contiguous array can speed up a little bit
    cdef int ii, jj
    cdef double tmp_res, dot_result
    cdef double[::1] tmp_sample = np.ones(sample_size, dtype=np.double)
    cdef double[::1] res_array = np.ones(res_size, dtype=np.double)

    ran = random.normalvariate   # generate random value as a test
    for ii in range(res_size):
        tmp_sample = all_data[ii:(ii + sample_size)]

        # inner product operation
        dot_result = 0.0
        for jj in range(sample_size):
            dot_result += tmp_sample[jj]*weight_array[jj]

        # save inner product result into array 
        res_array[ii] = dot_result
        #res_array[ii] = ran(10000,20000)

    return res_array

# setup.py
from setuptools import setup,find_packages
from distutils.extension import Extension
from Cython.Build import cythonize
import numpy as np

ext = Extension("mycython.test", sources=["mycython/test.pyx"])
setup(ext_modules=cythonize(ext),
      include_dirs=[np.get_include()],
      name="mycython",     
      version="0.1",
      packages=find_packages(),
      author="me",
      author_email="me@example.com",
      url="http://example.com/")   

The test.py script is:

import time
import random
import numpy as np
# import the compiled Cython extension; setup.py above builds it as mycython.test
from mycython import test as __cyn__

sample_size = 3000
test_array = [random.random() for _ in range(300000)]
res_size = len(test_array) - sample_size + 1
weight_array = [random.random() for _ in range(sample_size)]
c_contig_store_array = np.ascontiguousarray(test_array, dtype=np.double)
c_contig_weigh_array = np.ascontiguousarray(weight_array, dtype=np.double)


replay = 100
start_time = time.time()
for ii in range(int(replay)):
    __cyn__.cython_sample(res_size, sample_size, c_contig_store_array, c_contig_weigh_array)
per_elapsed_time = (time.time() - start_time) / replay
print('Elapse time :: %g sec' % (per_elapsed_time))

So I tested two scenarios:

# 1. when saving dot_result into 'res_array':
     res_array[ii] = dot_result

The speed test shows: Elapse time :: 0.821084 sec

# 2. when saving a random value ran(10000,20000) into 'res_array':
     res_array[ii] = ran(10000,20000)

The speed test shows: Elapse time :: 0.214591 sec

The reason I used ran(*,*) to test the code is that I noticed that if I comment out both res_array[ii] = dot_result and res_array[ii] = ran(10000,20000) in the original code, the run becomes almost 30-100 times faster (Elapse time :: 0.00633394 sec). So I thought the problem might lie in assigning the dot_result value to res_array, since assigning the randomly generated double ran(10000,20000) to res_array turns out to be fast (almost 4 times faster, as shown above).

Is there any way to fix this? Thanks.

1 Answer:

Answer 0 (score: 3):

If you never use the value of dot_result, the C compiler will eliminate the loop:

dot_result = 0.0
for jj in range(sample_size):
    dot_result += tmp_sample[jj]*weight_array[jj]

The inner loop is where most of the time is spent.
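
As a quick sanity check (a minimal sketch, not part of the original answer; it mirrors the setup in test.py and assumes the extension was built as mycython.test), the returned results can be compared against NumPy's sliding inner product to confirm that the loop really executes when res_array[ii] = dot_result is kept:

# sanity check (sketch): compare cython_sample against np.correlate
import numpy as np
from mycython import test as __cyn__   # the compiled extension built above

sample_size = 3000
all_data = np.random.random(300000)    # plays the role of test_array
weights = np.random.random(sample_size)
res_size = all_data.size - sample_size + 1

out = np.asarray(__cyn__.cython_sample(res_size, sample_size,
                                       np.ascontiguousarray(all_data),
                                       np.ascontiguousarray(weights)))
# np.correlate in "valid" mode computes the same sliding inner products
ref = np.correlate(all_data, weights, mode="valid")
print(np.allclose(ref, out))           # True when the loop is not optimized away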

Your Cython code looks like correlate(); you can speed it up with FFT:

from scipy import signal
res = signal.fftconvolve(c_contig_store_array, c_contig_weigh_array[::-1], mode="valid")
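
One way to convince yourself the FFT version is equivalent (a sketch, not from the original answer): reversing weight_array turns the convolution into a correlation, so fftconvolve should agree with the loop-based inner products up to floating-point round-off:

import numpy as np
from scipy import signal

a = np.random.random(300000)   # stands in for c_contig_store_array
w = np.random.random(3000)     # stands in for c_contig_weigh_array

# fftconvolve with the reversed kernel is a sliding correlation,
# i.e. the same quantity the jj-loop in cython_sample computes
res_fft = signal.fftconvolve(a, w[::-1], mode="valid")
res_loop = np.correlate(a, w, mode="valid")
print(np.allclose(res_fft, res_loop))   # True, up to FFT round-off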