Residual LSTM layers

Date: 2019-01-15 11:52:57

Tags: keras lstm deep-residual-networks

I'm having a hard time understanding how tensors behave in Keras LSTM layers.

I have preprocessed numerical data shaped like [samples, timesteps, features], so 10,000 samples, 24 timesteps, and 10 predictors.

I want to stack LSTM layers with residual connections, but I'm not sure I'm doing it right:

x <- layer_input(shape = c(24,10))

x <- layer_lstm(x,units=32,activation="tanh",return_sequences=T)

Now the shape of x is a tensor of [?, ?, 32]. I was expecting [?, 32, 10]. Should I reshape the data to [samples, features, timesteps]? Then I form the residual:

y <- layer_lstm(x,units=32,activation="tanh",return_sequences=T)

res <- layer_add(c(x, y))

Now I'm not sure whether this is correct, or whether I should instead use

x <- layer_input(shape = c(24,10))

y <- layer_lstm(x,units=24,activation="tanh",return_sequences=T) # same as time_steps

res <- layer_add(c(x,y)) ## perhaps data reshaping is necessary here?

Any insight is appreciated.

JJ

1 Answer:

Answer 0 (score: 1)

An LSTM layer with return_sequences=T will return output with dims (?, seq_length, out_dims), where out_dims is units in your case. So overall the dims will be:

x <- layer_input(shape = c(24,10))
# dims of x: (?,24,10)
x <- layer_lstm(x,units=32,activation="tanh",return_sequences=T)
# dims of x after lstm_layer (?,24,32)

y <- layer_lstm(x,units=32,activation="tanh",return_sequences=T)
# dims of y (?,24,32)
res <- layer_add(c(x, y))
# dims of res will be (?,24,32); it is the elementwise addition of the outputs of both lstm_layers.
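The shape bookkeeping above can be checked without Keras at all. This is a minimal NumPy sketch (shapes only, using a hypothetical batch of 3 instead of the asker's 10,000 samples) showing why the residual add works once both branches have the same (batch, timesteps, units) shape:

```python
import numpy as np

# Hypothetical small batch standing in for the asker's (10000, 24, 10) data.
batch, timesteps, features, units = 3, 24, 10, 32

inputs = np.zeros((batch, timesteps, features))   # (?, 24, 10)

# An LSTM with return_sequences=TRUE maps (batch, timesteps, features)
# to (batch, timesteps, units): one `units`-wide output vector per timestep.
x = np.zeros((batch, timesteps, units))   # after the first LSTM: (?, 24, 32)
y = np.zeros((batch, timesteps, units))   # after the second LSTM: (?, 24, 32)

# layer_add requires identical shapes; the elementwise sum preserves them.
res = x + y
print(res.shape)  # (3, 24, 32)
```

The key point is that the feature axis (10) is consumed by the first LSTM; from then on, every tensor in the residual stack is `units` wide.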

For more information, you can check this.
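One caveat about the asker's second variant: there x has shape (?, 24, 10) while y has shape (?, 24, 24), and layer_add rejects mismatched shapes. A common fix (an assumption on my part, not part of the answer above) is to project the input to the residual width first, e.g. with a timestep-wise dense layer. Sketching just the shape logic in NumPy:

```python
import numpy as np

# Shapes from the asker's second variant, with a hypothetical batch of 3.
batch, timesteps, features, units = 3, 24, 10, 24
x = np.zeros((batch, timesteps, features))   # input branch:  (?, 24, 10)
y = np.zeros((batch, timesteps, units))      # LSTM branch:   (?, 24, 24)

# x + y would fail here: trailing dims 10 vs 24 do not match.
# A linear projection applied per timestep (what a dense layer does on
# 3-D input) maps features -> units so the residual add becomes valid.
W = np.zeros((features, units))              # hypothetical projection weights
x_proj = x @ W                               # (?, 24, 24)

res = x_proj + y
print(res.shape)  # (3, 24, 24)
```

Matching units to timesteps (24) as in the asker's snippet does not help by itself; it is the last axis, not the middle one, that must agree for the addition.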