Speeding up the evaluation of Sympy symbolic expressions

Date: 2016-07-29 06:23:48

Tags: python performance numpy matrix sympy

A Python program I am currently working on (Gaussian process classification) is bottlenecked on the evaluation of Sympy symbolic matrices, and I can't figure out what, if anything, I can do to speed it up. I have already made sure the other parts of the program are typed correctly (in terms of numpy arrays), so the computations between them are properly vectorised, etc.

In particular I have looked at Sympy's codegen functions (autowrap, binary_function), but because my ImmutableMatrix objects are themselves partial derivatives over the elements of a symbolic matrix, there is a long list of 'unhashable' errors that prevents me from using the codegen functionality.
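
For reference, basic autowrap usage on a plain scalar expression works fine for me (a minimal sketch along the lines of the Sympy docs; it needs a working f2py or Cython toolchain) - it is only the matrices of partial derivatives below that trigger the unhashable errors:

from sympy import symbols
from sympy.utilities.autowrap import autowrap

x, y = symbols('x y')
expr = ((x - y)**2).expand()

# autowrap generates, compiles and wraps native code for the expression
fast_func = autowrap(expr)
print(fast_func(1.0, 4.0))  # 9.0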

Another possibility I looked into was using Theano - but after some initial benchmarking I found that, while it built the initial symbolic matrix of partial derivatives much faster, it seemed to be orders of magnitude slower at evaluation time, the opposite of what I am looking for.
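
The Theano attempt went through sympy's theano_function and looked roughly like this (a sketch from memory, so the exact keyword arguments may differ):

from sympy import symbols
from sympy.printing.theanocode import theano_function

x, y = symbols('x y')

# Compile the sympy expression into a callable Theano graph
f = theano_function([x, y], [(x - y)**2])
print(f(1.0, 4.0))  # 9.0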

Below is a working excerpt of the code I am currently dealing with.

import sympy
import numpy as np
import math
from datetime import datetime

# 'Vectorized' cdist that can handle symbols/arbitrary types - preliminary
# benchmarking put it at ~15 times faster than a Python list comprehension,
# though of course still notably slower than scipy's cdist (I forget the
# exact factor at the moment)
def sqeucl_dist(x, xs):
    # Broadcast both inputs to shape (len(x), len(xs), n_dims), subtract,
    # square elementwise, and sum over the last axis to get the matrix of
    # pairwise squared Euclidean distances
    m = np.sum(np.power(
        np.repeat(x[:,None,:], len(xs), axis=1) -
        np.resize(xs, (len(x), xs.shape[0], xs.shape[1])),
        2), axis=2)
    return m


def build_symbolic_derivatives(X):
    # Pre-calculate derivatives of inverted matrix to substitute values in the Squared Exponential NLL gradient
    f_err_sym, n_err_sym = sympy.symbols("f_err, n_err")

    # (1,n) shape 'matrix' (vector) of length scales for each dimension
    l_scale_sym = sympy.MatrixSymbol('l', 1, X.shape[1])

    # K matrix
    print("Building sympy matrix...")
    eucl_dist_m = sqeucl_dist(X/l_scale_sym, X/l_scale_sym)
    m = sympy.Matrix(f_err_sym**2 * math.e**(-0.5 * eucl_dist_m)
                     + n_err_sym**2 * np.identity(len(X)))

    # Element-wise derivative of K matrix over each of the hyperparameters
    print("Getting partial derivatives over all hyperparameters...")
    pd_t1 = datetime.now()
    dK_df   = m.diff(f_err_sym)
    dK_dls  = [m.diff(l) for l in l_scale_sym]  # iterating the MatrixSymbol yields its element symbols
    dK_dn   = m.diff(n_err_sym)
    print("Took: {}".format(datetime.now() - pd_t1))

    # Lambdify each of the dK/dts to speed up substitutions per optimization iteration
    print("Lambdifying ")
    l_t1 = datetime.now()
    dK_dthetas = [dK_df] + dK_dls + [dK_dn]
    dK_dthetas = sympy.lambdify((f_err_sym, l_scale_sym, n_err_sym), dK_dthetas, 'numpy')
    print("Took: {}".format(datetime.now() - l_t1))
    return dK_dthetas


# Evaluates each dK_dtheta pre-calculated symbolic lambda with current iteration's hyperparameters
def eval_dK_dthetas(dK_dthetas_raw, f_err, l_scales, n_err):
    l_scales = sympy.Matrix(l_scales.reshape(1, len(l_scales)))
    return np.array(dK_dthetas_raw(f_err, l_scales, n_err), dtype=np.float64)


dimensions = 3
X = np.random.rand(50, dimensions)
dK_dthetas_raw = build_symbolic_derivatives(X)

f_err = np.random.rand()
l_scales = np.random.rand(dimensions)
n_err = np.random.rand()

t1 = datetime.now()
dK_dthetas = eval_dK_dthetas(dK_dthetas_raw, f_err, l_scales, n_err) # ~99.7% of the total runtime
print(datetime.now() - t1)

In this example, evaluating the 5 symbolic 50x50 matrices (only 12,500 elements in total) takes 7 seconds. I have spent a long time searching for resources on speeding up operations like this, and tried translating it to Theano (at least until I found its evaluation was slower in my case), with no luck there either.
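
For completeness, the timing split (the ~99.7% in the comment above) can be checked with the standard-library profiler:

import cProfile

# Profile a single evaluation; sort by cumulative time to see which call dominates
cProfile.run(
    'eval_dK_dthetas(dK_dthetas_raw, f_err, l_scales, n_err)',
    sort='cumtime'
)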

Any help is greatly appreciated!

0 Answers:

No answers yet.