BatchNormalization nodes incorrectly linked together

Date: 2018-10-01 08:03:07

Tags: keras tensorboard

I am training a Keras network with BatchNormalization layers and noticed something strange in the TensorBoard graph. My network consists of a stack of 1D convolutions, each followed by a BatchNormalization layer. According to TensorBoard most of the graph looks fine, but the very first BatchNormalization layer appears to send information to all the other BatchNormalization layers. Is this normal?

According to Keras model.summary(), this is the network output:
| Layer (type)                       | Output Shape      | Param # | Connected to        |
|------------------------------------|-------------------|---------|---------------------|
| pt_cloud_0 (InputLayer)            | (None, None, 39)  | 0       |                     |
| pt_cloud_1 (InputLayer)            | (None, None, 39)  | 0       |                     |
| conv1d_0_0 (Conv1D)                | (None, None, 64)  | 2560    | pt_cloud_0[0][0]    |
| conv1d_1_0 (Conv1D)                | (None, None, 64)  | 2560    | pt_cloud_1[0][0]    |
| batchnorm_0_0 (BatchNormalization) | (None, None, 64)  | 256     | conv1d_0_0[0][0]    |
| batchnorm_1_0 (BatchNormalization) | (None, None, 64)  | 256     | conv1d_1_0[0][0]    |
| conv1d_0_1 (Conv1D)                | (None, None, 64)  | 4160    | batchnorm_0_0[0][0] |
| conv1d_1_1 (Conv1D)                | (None, None, 64)  | 4160    | batchnorm_1_0[0][0] |
| batchnorm_0_1 (BatchNormalization) | (None, None, 64)  | 256     | conv1d_0_1[0][0]    |
| batchnorm_1_1 (BatchNormalization) | (None, None, 64)  | 256     | conv1d_1_1[0][0]    |
| conv1d_0_2 (Conv1D)                | (None, None, 316) | 20540   | batchnorm_0_1[0][0] |
| conv1d_1_2 (Conv1D)                | (None, None, 316) | 20540   | batchnorm_1_1[0][0] |
| batchnorm_0_2 (BatchNormalization) | (None, None, 316) | 1264    | conv1d_0_2[0][0]    |
| batchnorm_1_2 (BatchNormalization) | (None, None, 316) | 1264    | conv1d_1_2[0][0]    |
| conv1d_0_3 (Conv1D)                | (None, None, 316) | 100172  | batchnorm_0_2[0][0] |
| conv1d_1_3 (Conv1D)                | (None, None, 316) | 100172  | batchnorm_1_2[0][0] |
| aux_in (InputLayer)                | (None, 46)        | 0       |                     |
| batchnorm_0_3 (BatchNormalization) | (None, None, 316) | 1264    | conv1d_0_3[0][0]    |
| batchnorm_1_3 (BatchNormalization) | (None, None, 316) | 1264    | conv1d_1_3[0][0]    |
| aux_dense_0 (Dense)                | (None, 384)       | 18048   | aux_in[0][0]        |
| global_max_0 (GlobalMaxPooling1D)  | (None, 316)       | 0       | batchnorm_0_3[0][0] |
| global_max_1 (GlobalMaxPooling1D)  | (None, 316)       | 0       | batchnorm_1_3[0][0] |
| aux_dense_1 (Dense)                | (None, 384)       | 147840  | aux_dense_0[0][0]   |
| concatenate_1 (Concatenate)        | (None, 1016)      | 0       | global_max_0[0][0]  |
|                                    |                   |         | global_max_1[0][0]  |
|                                    |                   |         | aux_dense_1[0][0]   |
| dense_0 (Dense)                    | (None, 384)       | 390528  | concatenate_1[0][0] |
| dropout_0 (Dropout)                | (None, 384)       | 0       | dense_0[0][0]       |
| dense_1 (Dense)                    | (None, 384)       | 147840  | dropout_0[0][0]     |
| prediction (Dense)                 | (None, 101)       | 38885   | dense_1[0][0]       |
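
As a quick sanity check (my own arithmetic, assuming the standard Keras Conv1D and BatchNormalization parameterization), the parameter counts in the summary are consistent with the wiring shown in the "Connected to" column:

# Sanity check of the summary's parameter counts (my own arithmetic,
# assuming standard Keras Conv1D / BatchNormalization layers):
def conv1d_params(in_channels, filters, kernel_size=1):
    return in_channels * kernel_size * filters + filters  # weights + biases

def batchnorm_params(channels):
    return 4 * channels  # gamma, beta, moving mean, moving variance

assert conv1d_params(39, 64) == 2560     # conv1d_0_0
assert batchnorm_params(64) == 256       # batchnorm_0_0
assert conv1d_params(64, 316) == 20540   # conv1d_0_2
assert batchnorm_params(316) == 1264     # batchnorm_0_2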

Here is (part of) the graph as shown in TensorBoard (if the image does not display, use this link: https://imgur.com/a/G74uIWE). A zoomed-in version is here: https://imgur.com/a/vtF3VWb
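
(The graph presumably comes from Keras's standard TensorBoard callback; a minimal sketch of such a setup, with the log directory and fit() arguments as placeholders rather than the exact training configuration:)

# Minimal sketch of TensorBoard logging; log_dir and the fit() arguments
# are placeholders, not the exact training configuration.
from keras.callbacks import TensorBoard

tb_callback = TensorBoard(log_dir='./logs', write_graph=True)
model.fit(train_inputs, train_labels, epochs=10, callbacks=[tb_callback])
# Afterwards, inspect the graph with: tensorboard --logdir ./logs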

The layer outlined in red is the first batch normalization layer I create in the network (batchnorm_0_0). I don't know much about the inner workings of batch normalization layers, but I find it strange that it is linked to all the other BN layers while those other BN layers are not linked to each other (they only connect to the inputs/outputs I assigned to them). I am wondering whether this is a bug in my code, in Keras, or in TensorBoard?

Update: the model code is below; it is written so that I can easily experiment with the number of convolution layers/filters etc., but it should be fairly self-explanatory.

# Imports needed by this snippet (Keras 2.x):
from keras.layers import (Input, Convolution1D, BatchNormalization,
                          GlobalMaxPooling1D, Dense, Dropout, Concatenate)
from keras.models import Model

def _build(self, conv_filter_counts, dense_counts, dense_dropout_rates=None):
    """
    Builds the model. The model will have the following architecture:
      (1) [Per pointcloud] N 1D convolution layers (with possibly different depths) followed by BatchNormalization
                           layers.
      (2) [Per pointcloud] A global max pooling layer (calculating a 'global feature' of the point cloud).
      (3) [Once] M dense layers (with possibly different numbers of neurons), optionally followed by Dropout layers.
      (4) [Once] A final dense layer with `self.class_count` neurons and softmax activation.

    Arguments:
      conv_filter_counts: A list (length N) containing the successive 1D convolution filter depths in (1)
      dense_counts: A list (length M) containing the number of successive neurons in (3)
      dense_dropout_rates: Optional. If specified, must be a list of length M containing the dropout rates
                           for each corresponding dense layer specified by `dense_counts`. Individual entries
                           can be set to None to disable dropout.
                           If not specified, dropout is applied nowhere.
    """
    inputs = [Input(shape=(None, self.pt_dim), name='pt_cloud_{}'.format(i)) for i in range(self.input_count)]
    if self.aux_input_count > 0:
        aux_input = Input(shape=(self.aux_input_count,), name='aux_in')

    if self.spatial_subnet:
        # Predict and apply spatial transform for each pointcloud.
        spatial_transforms = [transform_subnet(i, [64, 128, 256], [256, 64]) for i in inputs]
        inputs_tr = [apply_transform_layer(i, tr, self.pt_dim) for i, tr in zip(inputs, spatial_transforms)]
    else:
        inputs_tr = inputs

    global_feats = []
    for i, input_pts in enumerate(inputs_tr):
        x = input_pts

        # Convolution stack
        for j, c in enumerate(conv_filter_counts):
            x = Convolution1D(c, 1, activation='relu', name='conv1d_{}_{}'.format(i, j))(x)
            x = BatchNormalization(name='batchnorm_{}_{}'.format(i, j))(x)

        global_feats += [GlobalMaxPooling1D(name='global_max_{}'.format(i))(x)]

    # Concatenate features and possibly auxiliary input
    if self.aux_input_count > 0:
        x = aux_input

        # Create a dense subnetwork just for the auxiliary input.
        # Iterate over dense_counts only: dense_dropout_rates may still be
        # None at this point and is not used for the auxiliary stack.
        for i, c in enumerate(dense_counts):
            x = Dense(c, activation='relu', name='aux_dense_{}'.format(i))(x)

        x = Concatenate()(global_feats + [x])
    elif len(global_feats) > 1:
        x = Concatenate()(global_feats)
    else:
        x = global_feats[0]

    # Dense stack with optional dropout
    if dense_dropout_rates is None:
        dense_dropout_rates = [None] * len(dense_counts)

    for i, (c, d) in enumerate(zip(dense_counts, dense_dropout_rates)):
        x = Dense(c, activation='relu', name='dense_{}'.format(i))(x)
        if d is not None:
            x = Dropout(rate=d, name='dropout_{}'.format(i))(x)

    # Final prediction
    prediction = Dense(self.class_count, activation='softmax', name='prediction')(x)

    # Link all up in a model
    if self.aux_input_count > 0:
        inputs.append(aux_input)

    if len(inputs) == 1:
        inputs = inputs[0]

    return Model(inputs=inputs, outputs=prediction)
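
For reference, the summary above corresponds to a call along these lines (hypothetical; the argument values are read off the summary, and the dropout rate itself is a guess):

# Hypothetical invocation matching the summary above; conv/dense sizes are
# read off the summary (input_count=2, pt_dim=39, aux_input_count=46,
# class_count=101), while the dropout rate itself is a guess.
model = self._build(conv_filter_counts=[64, 64, 316, 316],
                    dense_counts=[384, 384],
                    dense_dropout_rates=[0.5, None])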

Kind regards,

Steven

1 answer:

Answer 0 (score: 0):

A cautious answer to my own question: @Mike, I think (and hope?) that this is indeed a bug on the TensorBoard side, since I cannot explain it otherwise.

I plotted the architecture using keras.utils.plot_model, and this also did not show any links between the BatchNormalization layers.
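
For completeness, that check was essentially the following (the output filename is arbitrary):

# The plot_model check; the output filename is arbitrary.
from keras.utils import plot_model

plot_model(model, to_file='architecture.png', show_shapes=True)
# The resulting diagram shows each BatchNormalization layer connected only
# to its own Conv1D neighbours, with no links between the BN layers.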