Keras ERROR: "cannot import name '_time_distributed_dense'"

Date: 2018-02-01 07:16:11

Tags: python keras

Since Keras does not yet ship with built-in support for attention models, I want to use the following custom attention implementation:

https://github.com/datalogue/keras-attention/blob/master/models/custom_recurrents.py

The problem is that when I run the code above, it returns the following error:

ImportError: cannot import name '_time_distributed_dense'

It looks like Keras versions above 2.0.0 no longer provide `_time_distributed_dense`.

The only part of the code that uses `_time_distributed_dense` is the following:

    def call(self, x):
        # store the whole sequence so we can "attend" to it at each timestep
        self.x_seq = x

        # apply a dense layer over the time dimension of the sequence
        # do it here because it doesn't depend on any previous steps
        # therefore we can save computation time:
        self._uxpb = _time_distributed_dense(self.x_seq, self.U_a, b=self.b_a,
                                             input_dim=self.input_dim,
                                             timesteps=self.timesteps,
                                             output_dim=self.units)

        return super(AttentionDecoder, self).call(x)
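For context, all that `_time_distributed_dense` computes is a dense projection (`y . w + b`) applied independently at every timestep of the sequence. A minimal NumPy sketch of that operation (the shapes and variable names here are illustrative, not taken from the Keras source):

```python
import numpy as np

# illustrative sizes
batch, timesteps, input_dim, units = 2, 4, 3, 5

x = np.random.rand(batch, timesteps, input_dim)  # input sequence
w = np.random.rand(input_dim, units)             # weight matrix shared across timesteps
b = np.random.rand(units)                        # bias shared across timesteps

# apply `y . w + b` to every temporal slice y of x
out = np.einsum('bti,iu->btu', x, w) + b

# identical to projecting each timestep separately
for t in range(timesteps):
    assert np.allclose(out[:, t, :], x[:, t, :] @ w + b)
```

The result has shape `(batch, timesteps, units)`, one projection per timestep.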

How should I change the `_time_distributed_dense(self ...)` part?

1 Answer:

Answer 0 (score: 1)

I just copied this from An Chen's answer on the GitHub issue (the page or his answer may be removed in the future).

from keras import backend as K

def _time_distributed_dense(x, w, b=None, dropout=None,
                            input_dim=None, output_dim=None,
                            timesteps=None, training=None):
    """Apply `y . w + b` for every temporal slice y of x.

    # Arguments
        x: input tensor.
        w: weight matrix.
        b: optional bias vector.
        dropout: whether to apply dropout (same dropout mask
            for every temporal slice of the input).
        input_dim: integer; optional dimensionality of the input.
        output_dim: integer; optional dimensionality of the output.
        timesteps: integer; optional number of timesteps.
        training: training phase tensor or boolean.

    # Returns
        Output tensor.
    """
    if not input_dim:
        input_dim = K.shape(x)[2]
    if not timesteps:
        timesteps = K.shape(x)[1]
    if not output_dim:
        output_dim = K.shape(w)[1]

    if dropout is not None and 0. < dropout < 1.:
        # apply the same dropout pattern at every timestep
        ones = K.ones_like(K.reshape(x[:, 0, :], (-1, input_dim)))
        dropout_matrix = K.dropout(ones, dropout)
        expanded_dropout_matrix = K.repeat(dropout_matrix, timesteps)
        x = K.in_train_phase(x * expanded_dropout_matrix, x, training=training)

    # collapse time dimension and batch dimension together
    x = K.reshape(x, (-1, input_dim))
    x = K.dot(x, w)
    if b is not None:
        x = K.bias_add(x, b)
    # reshape to 3D tensor
    if K.backend() == 'tensorflow':
        x = K.reshape(x, K.stack([-1, timesteps, output_dim]))
        x.set_shape([None, None, output_dim])
    else:
        x = K.reshape(x, (-1, timesteps, output_dim))
    return x

You can paste this function directly into your own Python code.
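If you want to sanity-check the collapse-and-reshape trick the function relies on, it can be reproduced in plain NumPy: flattening batch and time into one axis, applying the dense projection, and reshaping back gives the same result as projecting each timestep directly (shapes and names below are illustrative):

```python
import numpy as np

batch, timesteps, input_dim, output_dim = 2, 4, 3, 5
x = np.random.rand(batch, timesteps, input_dim)
w = np.random.rand(input_dim, output_dim)
b = np.random.rand(output_dim)

# collapse time dimension and batch dimension together, as the function does
flat = x.reshape(-1, input_dim)                 # (batch*timesteps, input_dim)
flat = flat @ w + b                             # dense projection plus bias
out = flat.reshape(-1, timesteps, output_dim)   # back to a 3D tensor

# matches a straightforward per-timestep projection
expected = np.einsum('bti,io->bto', x, w) + b
assert np.allclose(out, expected)
```

This is why the function can use a single 2D matrix multiply instead of looping over timesteps.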