Python 2.7 memory leak with scipy.minimize

Asked: 2017-10-24 07:50:28

Tags: python numpy memory-leaks scipy out-of-memory

During a fitting procedure, my RAM usage increases slowly but steadily (about 2.8 MB every two seconds) until I get a MemoryError or kill the program. This happens when I try to fit a model to roughly 80 measurements. The fitting is done by minimizing chi-squared with scipy.optimize.minimize.

What I have tried so far:

  • Collecting with the garbage collector every time chi_squared calls my model; this did not help.
  • Using globals() to inspect all variables and then pympler.asizeof to find the total space my variables occupy; this increases at first but then stays constant.
    • pympler.tracker.SummaryTracker also showed no growth in any variable's size.
  • I also looked into memory_profiler, but found nothing relevant.
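A minimal sketch of this kind of diagnostic loop, using only the standard library (sys.getsizeof over globals() as a rough stand-in for pympler.asizeof, which unlike this sketch follows references recursively):

```python
import gc
import sys


def snapshot_sizes(namespace):
    """Rough total size of the objects bound in a namespace.

    This is shallow (it does not follow references), but it is enough
    to see whether the tracked variables themselves are growing.
    """
    return sum(sys.getsizeof(v) for v in namespace.values())


# Force a collection and compare sizes between iterations.
gc.collect()
before = snapshot_sizes(globals())
data = list(range(10000))  # simulate work that allocates
gc.collect()
after = snapshot_sizes(globals())
print(after - before)  # grows after the allocation above
```

If the RAM reported by the OS keeps climbing while numbers like these stay flat, the leak is most likely below the Python object layer (e.g. in a C extension), which is consistent with the answer below.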

From these tests it seems that my RAM usage goes up while the total space occupied by my variables stays constant. I would really like to know where my memory is going.

The code below reproduces the problem for me:

import numpy as np
import scipy
import scipy.optimize as op
import scipy.stats
import scipy.integrate



def fit_model(model_pmt, x_list, y_list, PMT_parra, PMT_bounds=None, tolerance=10**-1, PMT_start_gues=None):
    result = op.minimize(chi_squared, PMT_start_gues, args=(x_list, y_list, model_pmt, PMT_parra[0], PMT_parra[1], PMT_parra[2]),
                     bounds=PMT_bounds, method='SLSQP', options={"ftol": tolerance})
    print result



def chi_squared(fit_parm, x, y_val, model, *non_fit_parm):
    parm = np.concatenate((fit_parm, non_fit_parm))
    y_mod = model(x, *parm)
    X2 = sum(pow(y_val - y_mod, 2))
    return X2



def basic_model(cb_list, max_intesity, sigma_e, noise, N, centre1, centre2, sigma_eb, min_dist=10**-5):
        """
        plateau function consisting of two gaussian CDF functions.
        """
        def get_distance(x, r):
            dist = abs(x - r)
            if dist < min_dist:
                dist = min_dist
            return dist

        def amount_of_material(x):
            A = scipy.stats.norm.cdf((x - centre1) / sigma_e)
            B = (1 - scipy.stats.norm.cdf((x - centre2) / sigma_e))
            cube =  A * B
            return cube

        def amount_of_field_INTEGRAL(x, cb):
            """Integral that is part of my sum"""
            result = scipy.integrate.quad(lambda r: scipy.stats.norm.pdf((r - cb) / sigma_b) / pow(get_distance(x, r), N),
                                          start, end, epsabs=10 ** -1)[0]
            return result



        # Set some constants, not important
        sigma_b = (sigma_eb**2-sigma_e**2)**0.5
        start, end = centre1 - 3 * sigma_e, centre2 + 3 * sigma_e
        integration_range = np.linspace(start, end, int(end - start) / 20)
        intensity_list = []

        # Doing a riemann sum, this is what takes the most time.
        for i, cb_point in enumerate(cb_list):
            intensity = sum([amount_of_material(x) * amount_of_field_INTEGRAL(x, cb_point) for x in integration_range])
            intensity *= (integration_range[1] - integration_range[0])
            intensity_list.append(intensity)


        model_values = np.array(intensity_list) / max(intensity_list) * max_intesity + noise
        return model_values


def get_dummy_data():
    """Can be ignored, produces something resembling my data with noise"""
    # X is just a range
    x_list = np.linspace(0, 300, 300)

    # Y is some sort of step function with noise
    A = scipy.stats.norm.cdf((x_list - 100) / 15.8)
    B = (1 - scipy.stats.norm.cdf((x_list - 200) / 15.8))
    y_list = A * B * .8 + .1 + np.random.normal(0, 0.05, 300)

    return x_list, y_list


if __name__=="__main__":
    # Set some variables
    start_pmt = [0.7, 8, 0.15, 0.6]
    pmt_bounds = [(.5, 1.3), (4, 15), (0.05, 0.3), (0.5, 3)]
    pmt_par = [110, 160, 15]
    x_list, y_list = get_dummy_data()

    fit_model(basic_model, x_list, y_list,  pmt_par, PMT_start_gues=start_pmt, PMT_bounds=pmt_bounds, tolerance=0.1)

Thanks for your help!

1 Answer:

Answer 0 (score: 4)

I narrowed the problem down by successively removing layer after layer of indirection. (@joris267 this is something you should have done yourself before asking.) The minimal remaining code that reproduces the problem looks like this:

import scipy.integrate

if __name__=="__main__":    
    while True:
        scipy.integrate.quad(lambda r: 0, 1, 100)
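To make the growth visible without waiting for an out-of-memory error, the loop can be bounded and the process's resident set size sampled along the way. This is a sketch using the Unix-only resource module; whether the samples actually climb depends on the scipy version installed:

```python
import resource

import scipy.integrate


def rss_kb():
    # Peak resident set size of this process (kilobytes on Linux).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss


samples = []
for i in range(2000):
    scipy.integrate.quad(lambda r: 0, 1, 100)
    if i % 500 == 0:
        samples.append(rss_kb())

# On scipy 0.19.0 the samples climb steadily with the iteration count;
# on a fixed version they level off after the first sample.
print(samples)
```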

Conclusions:

  1. Yes, there is a memory leak.
  2. No, the leak is not in scipy.minimize but in scipy.quad.
  3. However, this is a known issue with scipy 0.19.0. Upgrading to 0.19.1 should solve the problem, but I'm not sure, since I'm still on 0.19.0 myself :)
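A quick way to check whether an installed scipy already contains the fix is to compare its version against 0.19.1. The helper below is a minimal sketch (a packaging library's version parser would handle pre-release tags more robustly):

```python
import scipy


def version_tuple(version_string):
    """Turn a dotted version like '0.19.1' into a comparable tuple."""
    parts = []
    for part in version_string.split("."):
        digits = "".join(ch for ch in part if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)


fixed_in = (0, 19, 1)  # first release with the quad leak fixed
if version_tuple(scipy.__version__) < fixed_in:
    print("scipy %s may leak in quad; consider upgrading" % scipy.__version__)
else:
    print("scipy %s should include the quad leak fix" % scipy.__version__)
```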

    Update

    After upgrading scipy to 0.19.1 (and numpy to 1.13.3 for compatibility), the leak is gone on my system.