I'm trying to use parallelism in Julia to construct several large objects (specifically, Gaussian mixture models from sklearn, via PyCall).
If I were doing this serially, I would write:
models = Array(GMM, N)      # pre-allocate one slot per model
for i = 1:N
    params = ...            # build the parameters for model i
    models[i] = train_gmm(params)
end
However, I should be able to do this in parallel. I can't figure out where to start, since SharedArrays and @parallel don't seem right for this.
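(For what it's worth, I believe SharedArrays are out because a SharedArray can only hold bits types, plain fixed-size values like Float64, not objects like a model. A minimal sketch of what I mean, with GMM standing in for the model type:)
S = SharedArray(Float64, 10)   # fine: Float64 is a bits type
# SharedArray(GMM, 10)         # would fail: GMM is not a bits type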
I tried using @spawn, but found the following:
function f1()
    rand(10000000)
    rand(10000000)
    rand(10000000)
    rand(10000000)
    rand(10000000)
end
function f2()
    a = @spawn rand(10000000)
    b = @spawn rand(10000000)
    c = @spawn rand(10000000)
    d = @spawn rand(10000000)
    e = @spawn rand(10000000)
    a_r = fetch(a)
    b_r = fetch(b)
    c_r = fetch(c)
    d_r = fetch(d)
    e_r = fetch(e)
end
f1()  # warm-up calls so JIT compilation isn't included in the timings
f2()
println(@elapsed(f1()))
println(@elapsed(f2()))
f1 takes 0.21 seconds and f2 takes 0.32 seconds! Is there something about @spawn that I'm missing?
Edit:
It looks like doing:
function f1()
    [sum(rand(100000000)),
     sum(rand(100000000)),
     sum(rand(100000000)),
     sum(rand(100000000)),
     sum(rand(100000000))]
end
function f2()
    a = @spawn sum(rand(100000000))
    b = @spawn sum(rand(100000000))
    c = @spawn sum(rand(100000000))
    d = @spawn sum(rand(100000000))
    e = @spawn sum(rand(100000000))
    [fetch(a), fetch(b), fetch(c), fetch(d), fetch(e)]
end
makes f2() run faster than f1(), which is much closer to what I want. Unless someone knows a better, official way, I'll keep doing it like this.
Thanks.
Answer (score: 5)
I think your edit is right. The reason your first attempt was slower is data movement: each @spawn'd rand(10000000) is computed on a worker, but fetch then has to serialize the whole 10^7-element array back to the master process, and that communication cost swamps the computation. Once you reduce with sum on the worker, only a single Float64 crosses process boundaries.
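A minimal sketch of that overhead (my addition, assuming worker processes have been attached, e.g. with julia -p 3):
r_big = @spawn rand(10^7)         # array is built on a worker
@time fetch(r_big)                # pays to ship ~80 MB back to the master
r_small = @spawn sum(rand(10^7))  # reduce on the worker instead
@time fetch(r_small)              # only a single Float64 comes back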
IAINMAC:~ idunning$ julia -p 3
julia> @everywhere function foo()
           sleep(2)
       end
julia> @time [foo(), foo(), foo()]
elapsed time: 6.017959282 seconds (294088 bytes allocated)
3-element Array{Nothing,1}:
nothing
nothing
nothing
julia> function bar()
           a = @spawn foo()
           b = @spawn foo()
           c = @spawn foo()
           [fetch(a), fetch(b), fetch(c)]
       end
bar (generic function with 1 method)
julia> @time bar()
elapsed time: 2.030760103 seconds (199720 bytes allocated)
3-element Array{Nothing,1}:
nothing
nothing
nothing
Or, more elegantly, using pmap:
julia> @everywhere function foo(a::Int)
           sleep(a)
       end
julia> @time pmap(foo,1:3)
elapsed time: 3.004821524 seconds (448540 bytes allocated)
3-element Array{Any,1}:
nothing
nothing
nothing
julia> @time map(foo,1:3)
elapsed time: 6.006557822 seconds (1368 bytes allocated)
3-element Array{Nothing,1}:
nothing
nothing
nothing
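Note the timings: pmap finishes in max(1,2,3) = 3 seconds, while the serial map needs 1+2+3 = 6. Applied back to your original question, the same pattern should work. Here is a sketch, where train_gmm is your placeholder name and a sleep stands in for the expensive PyCall/sklearn fit, so it runs as-is:
@everywhere function train_gmm(params)
    sleep(1)        # stand-in for fitting the model
    return params   # stand-in for the trained GMM
end

param_list = collect(1:5)              # one parameter set per model
models = pmap(train_gmm, param_list)   # parallel version of the serial loop
One caveat: objects created through PyCall live in each worker's Python interpreter, so whether the trained models can be fetched back to the master depends on PyCall's serialization support for PyObject.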