How to apply the same Convolution1D layers to a 4D input

Asked: 2019-12-18 15:32:51

Tags: python tensorflow keras neural-network conv-neural-network

Suppose I have the following layers in Keras:

from tensorflow.keras.layers import (Convolution1D, GlobalMaxPooling1D,
                                     Activation, Dropout, Dense)

def _initialize_conv_layer(name):
    # Build (but do not apply) the shared layers for one branch.
    conv1 = Convolution1D(filters=1000,
                          kernel_size=5,
                          activation="relu",
                          name="conv_" + name,
                          padding="valid")
    conv2 = GlobalMaxPooling1D(name="max_pool_" + name)
    conv3 = Activation("relu", name="act_" + name)
    conv4 = Dropout(rate=0.1, name="dropout_" + name)
    z = Dense(100, name="vector" + name)
    return conv1, conv2, conv3, conv4, z

and:

def _get_vector(input_, conv1, conv2, conv3, conv4, z):
    # Apply the shared layers in sequence to one input tensor.
    i1 = conv1(input_)
    i2 = conv2(i1)
    i3 = conv3(i2)
    i4 = conv4(i3)
    vector_ = z(i4)
    return vector_

and:

conv1, conv2, conv3, conv4, z = _initialize_conv_layer("message")
z1 = _get_vector(embedded_sequences, conv1, conv2, conv3, conv4, z)

where:

  • embedded_sequences is an embedding of shape (batch_size, 200, 100)
  • z1 is the resulting output of shape (batch_size, 100)

My question is how to apply the same layers (rather than creating new ones) in:

z2 = _get_vector(embedded_sequences2, conv1, conv2, conv3, conv4, z)

where:

  • embedded_sequences2 has shape (batch_size, 50, 200, 100)
  • z2 should have shape (batch_size, 50, 100)

In other words, I want to apply the same convolution to each of the 50 rows along the second dimension.

My understanding is that I should apply a Lambda inside TimeDistributed. Or perhaps I need to reshape the data? Is that correct?

Any ideas on how to do this?
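For reference, the TimeDistributed idea mentioned above can be sketched like this (an assumed approach, not a verified answer): build the shared pipeline once as a sub-model, apply it directly to the 3D input, and wrap the same sub-model in TimeDistributed for the 4D input so identical weights are applied to each of the 50 rows. The layer names and shapes below mirror the question; the sub-model itself (`inner_model`) is hypothetical.

```python
# Sketch (assumed approach): share one sub-model across a 3D and a 4D input.
from tensorflow.keras.layers import (Input, Convolution1D, GlobalMaxPooling1D,
                                     Activation, Dropout, Dense, TimeDistributed)
from tensorflow.keras.models import Model

# Shared pipeline over a single (200, 100) sequence, packaged as a Model
# so it can be reused with the same weights.
inner_in = Input(shape=(200, 100))
x = Convolution1D(filters=1000, kernel_size=5, activation="relu",
                  padding="valid", name="conv_message")(inner_in)
x = GlobalMaxPooling1D(name="max_pool_message")(x)
x = Activation("relu", name="act_message")(x)
x = Dropout(rate=0.1, name="dropout_message")(x)
inner_out = Dense(100, name="vectormessage")(x)
inner_model = Model(inner_in, inner_out)  # (batch, 200, 100) -> (batch, 100)

# 3D input: call the sub-model directly.
embedded_sequences = Input(shape=(200, 100))
z1 = inner_model(embedded_sequences)                    # (batch, 100)

# 4D input: TimeDistributed applies the SAME sub-model (same weights)
# to each of the 50 slices along axis 1.
embedded_sequences2 = Input(shape=(50, 200, 100))
z2 = TimeDistributed(inner_model)(embedded_sequences2)  # (batch, 50, 100)

model = Model([embedded_sequences, embedded_sequences2], [z1, z2])
```

Because TimeDistributed accepts any layer, including a Model, no Lambda or manual reshape should be needed; the Dropout and Dense layers inside the sub-model are time-distributed along with the convolution.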

0 Answers:

There are no answers yet.