I'm trying to train an LSTM:
from __future__ import print_function
import mxnet as mx
import numpy as np
from mxnet import nd, autograd, sym
from mxnet import gluon
ctx = mx.cpu()
LIMIT = 20
data = np.array([(s, 1) for s in spanish_sentences[:LIMIT]] + [(s, 0) for s in english_sentences[:LIMIT]])
layer = mx.gluon.rnn.LSTM(100, 3)
net = mx.gluon.nn.Dense(2)
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
layer.initialize(ctx=ctx)
net.collect_params().initialize(ctx=ctx)
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': .1})
for epoch in range(10):
    np.random.shuffle(data)
    losses = []
    for s, l in data:
        if len(s) == 0:
            continue
        x = nd.array([ord(c) for c in s]).reshape(shape=(-1, 1, 1))  # (seq_len, batch=1, input=1)
        y = nd.array([np.eye(2)[int(l)]])  # one-hot label, shape (1, 2)
        with autograd.record():
            output = layer(x)
            output = output[output.shape[0] - 1, :, :]  # last time step
            pred = net(output)
            loss = softmax_cross_entropy(pred, y)
        losses.append(loss.asscalar())
        trainer.step(1, ignore_stale_grad=True)
    print("Loss:", np.mean(losses), "+-", np.std(losses))
but I get this error:
---------------------------------------------------------------------------
MXNetError Traceback (most recent call last)
<ipython-input-31-12ab8d4ad733> in <module>()
30 output = layer(x)[output.shape[0]-1, :, :]
31 pred = net(output)
---> 32 loss = softmax_cross_entropy(pred, y)
33 losses.append(loss.asscalar())
34 trainer.step(1, ignore_stale_grad=True)
... Stack trace ...
MXNetError: Shape inconsistent, Provided=(1,2), inferred shape=(1,1)
What am I doing wrong? When I check the shapes of pred and y, I find that both are (1, 2). I don't understand why (1, 1) is expected.
Answer (score 2):
This one is simple: SoftmaxCrossEntropyLoss()(pred, label) expects pred.shape = (BATCH_SIZE, N_LABELS) and label.shape = (BATCH_SIZE,). By default (sparse_label=True) the label is a vector of class indices, not one-hot rows, which is why your one-hot y of shape (1, 2) fails shape inference.
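For illustration, a minimal sketch of that shape contract (assuming the imports from the question; the values are arbitrary):

# Class-index labels work with the default sparse_label=True:
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
pred = nd.array([[0.3, 0.7]])   # (BATCH_SIZE=1, N_LABELS=2)
label = nd.array([1])           # (BATCH_SIZE=1,) -- class indices
print(loss_fn(pred, label))     # OK

# One-hot labels like np.eye(2)[l] only work with sparse_label=False:
loss_fn_dense = gluon.loss.SoftmaxCrossEntropyLoss(sparse_label=False)
print(loss_fn_dense(pred, nd.array([[0.0, 1.0]])))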
So y = nd.array([l]) fixes it.
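Put together, the inner loop becomes something like this sketch (it keeps the question's setup and also adds the loss.backward() call that the posted snippet omits; without it, trainer.step has no fresh gradients to apply):

# Corrected inner loop; assumes data, layer, net, trainer, and
# softmax_cross_entropy as defined in the question.
for s, l in data:
    if len(s) == 0:
        continue
    x = nd.array([ord(c) for c in s]).reshape((-1, 1, 1))  # (seq_len, batch=1, input=1)
    y = nd.array([l])                                      # (1,): class index, not one-hot
    with autograd.record():
        output = layer(x)                         # (seq_len, 1, 100)
        last = output[output.shape[0] - 1, :, :]  # hidden state at the final step, (1, 100)
        pred = net(last)                          # (1, 2)
        loss = softmax_cross_entropy(pred, y)
    loss.backward()  # omitted in the posted snippet; needed before trainer.step
    trainer.step(1)
    losses.append(loss.asscalar())

Note, too, that the Trainer above was built from net.collect_params() only, so the LSTM's weights are never updated; to train the whole model, pass the LSTM's parameters to the Trainer as well.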