I'm a bit confused about Keras's average pooling layers. The documentation states the following:
AveragePooling1D: Average pooling for temporal data.
Arguments
pool_size: Integer, size of the average pooling windows.
strides: Integer, or None. Factor by which to downscale. E.g. 2 will halve the input. If None, it will default to pool_size.
padding: One of "valid" or "same" (case-insensitive).
data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs.
channels_last corresponds to inputs with shape (batch, steps, features), while channels_first corresponds to inputs with shape (batch, features, steps).
Input shape
If data_format='channels_last': 3D tensor with shape: (batch_size, steps, features)
If data_format='channels_first': 3D tensor with shape: (batch_size, features, steps)
Output shape
If data_format='channels_last': 3D tensor with shape: (batch_size, downsampled_steps, features)
If data_format='channels_first': 3D tensor with shape: (batch_size, features, downsampled_steps)
and
GlobalAveragePooling1D: Global average pooling operation for temporal data.
Arguments
data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs.
channels_last corresponds to inputs with shape (batch, steps, features), while channels_first corresponds to inputs with shape (batch, features, steps).
Input shape
If data_format='channels_last': 3D tensor with shape: (batch_size, steps, features)
If data_format='channels_first': 3D tensor with shape: (batch_size, features, steps)
Output shape
2D tensor with shape: (batch_size, features)
I (think I) do get the concept of average pooling, but I don't really understand why the GlobalAveragePooling1D layer simply drops the steps dimension. Thanks a lot in advance for your answers.
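For concreteness, here is a minimal sketch of the shapes I'm asking about (the input shape of 4 sequences × 10 steps × 3 features is just an arbitrary example):

    import numpy as np
    import tensorflow as tf

    # Arbitrary example input: batch of 4 sequences, 10 time steps, 3 features.
    x = np.random.rand(4, 10, 3).astype("float32")

    # AveragePooling1D only downsamples the steps dimension.
    avg = tf.keras.layers.AveragePooling1D(pool_size=2)(x)
    print(avg.shape)  # (4, 5, 3)

    # GlobalAveragePooling1D removes the steps dimension entirely.
    gap = tf.keras.layers.GlobalAveragePooling1D()(x)
    print(gap.shape)  # (4, 3)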
Answer 0 (score: 1)
GlobalAveragePooling1D is the same as AveragePooling1D with pool_size=steps. So, for each feature dimension, it takes the average over all time steps. The output therefore has shape (batch_size, 1, features) (if data_format='channels_last'). They simply flatten that second dimension (the last one if data_format='channels_first'), which is how the output shape ends up as (batch_size, features).
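You can check this equivalence directly; here is a minimal sketch (the concrete shapes are just an illustrative assumption):

    import numpy as np
    import tensorflow as tf

    steps, features = 10, 3
    x = np.random.rand(4, steps, features).astype("float32")

    # AveragePooling1D with pool_size=steps averages over the whole time axis
    # but keeps a (now length-1) steps dimension.
    pooled = tf.keras.layers.AveragePooling1D(pool_size=steps)(x)
    print(pooled.shape)  # (4, 1, 3)

    # GlobalAveragePooling1D does the same averaging and drops that dimension.
    global_pooled = tf.keras.layers.GlobalAveragePooling1D()(x)
    print(global_pooled.shape)  # (4, 3)

    # The values match once the singleton steps dimension is flattened away.
    flattened = tf.keras.layers.Flatten()(pooled)
    print(np.allclose(flattened.numpy(), global_pooled.numpy()))  # True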