How do I incorporate multiprocessing into existing Python code?

Asked: 2015-03-05 18:27:11

Tags: python multiprocessing

I'm having serious trouble incorporating multiprocessing into my existing Python code. Below is my program gauss.py, which essentially fits Gaussians to data. What is the right way to add multiprocessing and run this script multiple times with different input files? Should I create a separate .py script that calls this one as a function, or should I add a main section below all the existing code?

Also, I currently type the data input file on the command line by hand when I execute the script. I'm guessing this needs to change to some kind of queue format?

import sys
import numpy
import pymultinest

import init  # initialization file; must be imported before init.target is used below

#-------

infile = sys.argv[1]
suffix = infile[-5:]  # last five characters of the input filename
outfile = "out/test/" + init.target + suffix

wave, flux = numpy.loadtxt(infile, usecols=(0, 1), unpack=True)

x       = wave[init.start:init.end]
ydata   = abs(flux[init.start:init.end])
maxy    = max(flux[init.start:init.end])
textpos = (.1*(init.plotmax-init.plotmin))+init.plotmin
systemic= (1.+init.red)*init.orig_wave
cont    = flux[init.low1:init.upp1] #select continuum adjacent to emission line
avg     = sum(cont)/len(cont)

stdev   = numpy.std(cont) #standard deviation of the continuum flux
noise   = stdev * numpy.sqrt(ydata / avg) #signal-dependent noise

###### GAUSSIAN MODEL ######

def make_gauss(mu, sigma, N):
    s = -1.0 / (2 * sigma * sigma)
    def f(x):
        return N * numpy.exp(s * (x - mu) * (x - mu))
    return f

def model1(pos1, width1, height1):
    gaussian1 = make_gauss(pos1, width1, height1)
    return  gaussian1(x) + avg

def prior(cube, ndim, nparams):
    cube[0] = init.minwave + (cube[0]*init.wave_range)      # uniform wavelength prior
    cube[1] = init.minwidth + (cube[1]*(init.maxwidth-init.minwidth))   # uniform width prior
    cube[2] = init.fluxsigma * stdev * 10**(cube[2]*5)  # log-uniform flux prior 

# ----------------------
# analyse with 1 gaussian

def loglike1(cube, ndim, nparams):
    pos1, width1, height1 = cube[0], cube[1], cube[2]
    ymodel1 = model1(pos1, width1, height1)
    loglikelihood = -0.5 * (((ymodel1 - ydata) / noise)**2).sum()
    return loglikelihood

# number of dimensions our problem has
parameters = ["pos1", "width1", "height1"]
n_params = len(parameters)

# run MultiNest
pymultinest.run(loglike1, prior, n_params,
                outputfiles_basename=outfile + '_1_',
                n_live_points=200, multimodal=False,
                resume=False, verbose=False)

1 Answer:

Answer 0 (score: 0):

You could parallelize inside loglike1.

But it is better to run the script under MPI. For example, "mpirun -np 4" launches your script in 4 processes. MultiNest detects that it is running in MPI mode and distributes the likelihood calls across the processes. If mpi4py is installed, pymultinest automatically loads the MPI-enabled MultiNest library.
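For the other half of the question (running the fit over many input files without MPI), one option is to refactor gauss.py so that its body lives in a function taking the input filename, then drive it from a small wrapper with multiprocessing.Pool. A minimal sketch, where run_fit is a hypothetical name and its body here is only a stand-in for the real fitting code:

```python
# Hypothetical wrapper: run the fit over several input files in parallel.
# run_fit() is a stand-in; in practice it would contain the body of
# gauss.py (load the data, define the prior/likelihood, call
# pymultinest.run) with the input filename passed in as an argument.
from multiprocessing import Pool

def run_fit(infile):
    suffix = infile[-5:]             # mirrors gauss.py's filename slicing
    outfile = "out/test/" + suffix   # init.target omitted in this sketch
    # ... load infile, then pymultinest.run(..., outputfiles_basename=outfile) ...
    return outfile

if __name__ == "__main__":
    infiles = ["spec1.txt", "spec2.txt", "spec3.txt", "spec4.txt"]
    with Pool(processes=4) as pool:  # one worker process per file
        results = pool.map(run_fit, infiles)
    print(results)
```

Each worker is a separate process, so the module-level globals in gauss.py (x, ydata, noise, avg) must move inside run_fit or be derived from its argument; the hand-typed command-line input is then replaced by the list of files passed to pool.map, which acts as the queue.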