Julia MethodError: no method matching (::Dense{typeof(logistic),CuArray{Float32,2,Nothing},CuArray{Float32,1,Nothing}})(::Float32)

Date: 2019-11-30 14:21:55

Tags: julia julia-gpu

I have the following training data stored in CuArrays:

X: 300×8544 CuArray{Float32,2,Nothing}
y: 5×8544 Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1,Nothing}}

I want to train the following model:

# define activation
logistic(x) = 1. / (1 .+ exp.(-x))

# first define a 2-layer MLP model
model = Chain(Dense(300, 64, logistic),
          Dense(64, c),
          softmax) |> gpu

# define the loss
loss(x, y) = Flux.crossentropy(model(x), y)

# define the optimiser
optimiser = ADAM()

But when I run

Flux.train!(loss, params(model), zip(X, y), optimiser)

I get the following error:

MethodError: no method matching (::Dense{typeof(logistic),CuArray{Float32,2,Nothing},CuArray{Float32,1,Nothing}})(::Float32)

How should I fix this?

1 Answer:

Answer #0 (score: 0)

@D.Danier Please provide a minimal working example (MWE), meaning complete code that people can copy, paste, and run. Below is one such example:

#Pkg.activate("c:/scratch/flux-test")

using CuArrays, Flux
CuArrays.allowscalar(false)

# define activation
# you don't need the broadcast dots; Dense applies the activation element-wise
logistic(x) = 1 / (1 + exp(-x))

# ensure your code works on GPU
CuArrays.@cufunc logistic(x) = 1 / (1 + exp(-x))

X = cu(rand(300, 8544))
y = cu(rand(5, 8544))
c = 5

# first define a 2-layer MLP model
model = Chain(Dense(300, 64, logistic),
          Dense(64, c),
          softmax) |> gpu

# define the loss
loss(x, y) = Flux.crossentropy(model(x), y) |> gpu

model(X)

# define the optimiser
optimiser = ADAM()

loss(X, y)

Flux.train!(loss, params(model), zip(eachcol(X), eachcol(y)), optimiser)
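As a side note on why the original call errors: zipping two matrices iterates them element by element, so Flux.train! ends up calling the model on single Float32 scalars instead of on columns. A minimal illustration of the difference (small CPU arrays, purely for demonstration and not part of the original answer):

Xs = rand(Float32, 300, 4)
ys = rand(Float32, 5, 4)

first(zip(Xs, ys))                      # a pair of scalars, e.g. (0.12f0, 0.98f0)
first(zip(eachcol(Xs), eachcol(ys)))    # a pair of columns: (300-element view, 5-element view)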

When you call Flux.train!, you must tell Flux that you want to pair up the columns of X and y to compute the loss. By the way, this is probably not ideal, because it performs one update per column, which is a lot of iterations; you may want to group the columns into mini-batches (see the sketch at the end of this answer). Or, if your problem really is this small, you can compute the whole thing in one go, e.g.

Flux.train!(loss, params(model), [(X, y)], optimiser)

which basically says to compute the loss from X and y as a whole.
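If you do want mini-batches instead, here is a minimal sketch under assumptions not in the original answer: an arbitrary batchsize of 128 and plain column slicing via Base.Iterators.partition, reusing the X, y, model, loss, and optimiser defined above.

using Base.Iterators: partition

batchsize = 128    # assumed value, tune for your problem
# group the columns into (X-batch, y-batch) pairs
batches = [(X[:, idx], y[:, idx]) for idx in partition(1:size(X, 2), batchsize)]

Flux.train!(loss, params(model), batches, optimiser)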