Why does sharing layers in Keras make graph building extremely slow (TensorFlow backend)?

Time: 2017-07-14 14:41:13

Tags: tensorflow keras

I am building a graph in which the input is split into a list of 30 tensors. I then apply a shared RNN layer to each element of the list.

Compiling the model takes about 1 minute. Does it have to be this way (and if so, why?), or am I doing something wrong?
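For clarity, the per-index slicing used below to split the input can be sketched with NumPy (the shapes here are illustrative stand-ins, not from the original model):

```python
import numpy as np

# Hypothetical shapes: batch=2, time=5, action_space=30, features=3
x = np.random.rand(2, 5, 30, 3)

# Split along axis=2 into a list of 30 tensors, one per action index,
# mirroring the x[:, :, i, :] slicing done inside the Lambda layers:
slices = [x[:, :, i, :] for i in range(x.shape[2])]

print(len(slices))      # 30 slices
print(slices[0].shape)  # each has shape (2, 5, 3)
```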

Code:

import keras
from keras import backend as K
from keras.layers import TimeDistributed

# Layers to be shared (reused with the same weights) across all slices:
shared_lstm = keras.layers.LSTM(4, return_sequences=True)
shared_dense = TimeDistributed(keras.layers.Dense(1, activation='sigmoid'))

inp_train = keras.layers.Input([None, se.action_space, 3])

# Split each possible measured label into a list
# (bind i as a default argument so each Lambda keeps its own index):
inputs_train = [keras.layers.Lambda(lambda x, i=i: x[:, :, i, :])(inp_train)
                for i in range(se.action_space)]

# Apply the shared weights to each tensor:
lstm_out_train = [shared_lstm(x) for x in inputs_train]
dense_out_train = [shared_dense(x) for x in lstm_out_train]

# Merge the tensors again:
out_train = keras.layers.Lambda(lambda x: K.stack(x, axis=2))(dense_out_train)

# "Pick" the unique element where the inp_train tensor == 1.0 (along axis=2,
# at the next time step, using the first component of axis=3)
# (please disregard this line if it seems too complex)
shift_and_pick_layer = keras.layers.Lambda(
    lambda x: K.sum(x[0][:, :-1, :, 0] * x[1][:, 1:, :, 0], axis=2))

out_train = shift_and_pick_layer([out_train, inp_train])

m_train = keras.models.Model(inp_train, out_train)
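To make the merge and "pick" steps concrete, here is a NumPy sketch of what `K.stack` and the shift-and-pick Lambda compute (shapes and the one-hot input are hypothetical, chosen only to illustrate the indexing):

```python
import numpy as np

# Hypothetical shapes: batch=2, time=5, action_space=30
dense_outs = [np.random.rand(2, 5, 1) for _ in range(30)]

# Analogue of K.stack(x, axis=2): shape (2, 5, 30, 1)
out = np.stack(dense_outs, axis=2)

# Build a stand-in input whose feature 0 is one-hot over the action axis:
inp = np.zeros((2, 5, 30, 3))
idx = np.random.randint(0, 30, size=(2, 5))
for b in range(2):
    for t in range(5):
        inp[b, t, idx[b, t], 0] = 1.0

# Shift-and-pick analogue: multiply predictions at step t by the one-hot
# indicators at step t+1, then sum over the action axis, selecting one
# prediction per (batch, shifted time step).
picked = np.sum(out[:, :-1, :, 0] * inp[:, 1:, :, 0], axis=2)
print(picked.shape)  # (2, 4)
```

So `picked[b, t]` equals the prediction `out[b, t, idx[b, t+1], 0]`, i.e. the output for whichever action is marked 1.0 at the next time step.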

0 Answers:

There are no answers yet.