Possible problem with LSTM in Lasagne

Date: 2016-02-25 11:01:49

Tags: machine-learning lstm lasagne

Using the simple LSTM constructor given in the tutorial, with input of dimension [,1], I expected to see output of shape [,NUM_UNITS]. But no matter what num_units is passed at construction, the output has the same shape as the input.

Here is minimal code to reproduce the issue...

    import lasagne
    import theano
    import theano.tensor as T
    import numpy as np

    num_batches = 20
    sequence_length = 100
    data_dim = 1
    train_data_3 = np.random.rand(num_batches, sequence_length, data_dim).astype(theano.config.floatX)

    #As in the tutorial
    forget_gate = lasagne.layers.Gate(b=lasagne.init.Constant(5.0))
    l_lstm = lasagne.layers.LSTMLayer(
                                     (num_batches,sequence_length, data_dim), 
                                     num_units=8,
                                     forgetgate=forget_gate
                                     )

    lstm_in = T.tensor3(name='x', dtype=theano.config.floatX)

    lstm_out = lasagne.layers.get_output(l_lstm, {l_lstm: lstm_in})
    f = theano.function([lstm_in], lstm_out)
    lstm_output_np = f(train_data_3)

    lstm_output_np.shape
    #= (20, 100, 1)

Shouldn't a vanilla LSTM (I mean with default settings) produce one output per unit? The code was run on Kaixhin's cuda-lasagne Docker image. What gives? Thanks!

1 Answer:

Answer 0 (score: 0)

You can fix this by using lasagne.layers.InputLayer:
    import lasagne
    import theano
    import theano.tensor as T
    import numpy as np

    num_batches = 20
    sequence_length = 100
    data_dim = 1
    train_data_3 = np.random.rand(num_batches, sequence_length, data_dim).astype(theano.config.floatX)

    #As in the tutorial
    forget_gate = lasagne.layers.Gate(b=lasagne.init.Constant(5.0))
    input_layer = lasagne.layers.InputLayer(shape=(num_batches,  # <-- change
                  sequence_length, data_dim))  # <-- change
    l_lstm = lasagne.layers.LSTMLayer(input_layer,  # <-- change
                                     num_units=8,
                                     forgetgate=forget_gate
                                     )

    lstm_in = T.tensor3(name='x', dtype=theano.config.floatX)

    lstm_out = lasagne.layers.get_output(l_lstm, lstm_in)  # <-- change
    f = theano.function([lstm_in], lstm_out)
    lstm_output_np = f(train_data_3)

    print(lstm_output_np.shape)

If you feed the input through an input_layer, it is no longer ambiguous, so you don't even need a dict to specify where the input should go. Passing the shape tuple directly to the LSTM and then mapping the tensor3 onto it does not work: in your original code, `get_output(l_lstm, {l_lstm: lstm_in})` maps the expression to the LSTM layer itself, so `get_output` substitutes `lstm_in` for the LSTM's own output, which is why the result has exactly the input's shape.
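To see why the expected shape is (20, 100, 8) rather than (20, 100, 1), here is a minimal numpy-only sketch of a single-layer LSTM forward pass (not Lasagne's implementation; the weight initialization and helper names are my own). Each of the num_units cells emits one hidden value per time step, so the hidden-state tensor has num_units as its last dimension:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x, num_units, seed=0):
    """Minimal LSTM forward pass (illustration only).

    x: array of shape (num_batches, sequence_length, data_dim)
    returns hidden states of shape (num_batches, sequence_length, num_units)
    """
    rng = np.random.default_rng(seed)
    num_batches, seq_len, data_dim = x.shape
    # One input-to-hidden and one hidden-to-hidden matrix per gate:
    # ingate, forgetgate, cell candidate, outgate.
    W = rng.standard_normal((4, data_dim, num_units)) * 0.1
    U = rng.standard_normal((4, num_units, num_units)) * 0.1
    b = np.zeros((4, num_units))
    b[1] = 5.0  # forget-gate bias, mirroring Gate(b=Constant(5.0)) above

    h = np.zeros((num_batches, num_units))  # hidden state
    c = np.zeros((num_batches, num_units))  # cell state
    outputs = []
    for t in range(seq_len):
        xt = x[:, t, :]
        i = sigmoid(xt @ W[0] + h @ U[0] + b[0])   # input gate
        f = sigmoid(xt @ W[1] + h @ U[1] + b[1])   # forget gate
        g = np.tanh(xt @ W[2] + h @ U[2] + b[2])   # cell candidate
        o = sigmoid(xt @ W[3] + h @ U[3] + b[3])   # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
        outputs.append(h)
    return np.stack(outputs, axis=1)

x = np.random.rand(20, 100, 1)
out = lstm_forward(x, num_units=8)
print(out.shape)  # (20, 100, 8)
```

The last dimension of the output is num_units, independent of data_dim, which is exactly what the corrected Lasagne code above produces.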