TensorFlow 2.0: shape inference with Reshape returns a None dimension

Asked: 2020-05-29 18:01:32

Tags: python tensorflow keras tensorflow2.0

I am using a CNN-LSTM model for sequence classification with TensorFlow 2.0 + Keras. My model is defined as follows:

    from tensorflow.keras.layers import (Input, Reshape, Conv1D, MaxPooling1D,
                                         AveragePooling1D, Dropout, Flatten,
                                         Bidirectional, LSTM, TimeDistributed,
                                         Dense)
    from tensorflow.keras.models import Model

    inp = Input(input_shape)
    rshp = Reshape((input_shape[0]*input_shape[1], 1), input_shape=input_shape)(inp)
    cnn1 = Conv1D(100, 9, activation='relu')(rshp)
    cnn2 = Conv1D(100, 9, activation='relu')(cnn1)
    mp1 = MaxPooling1D((3,))(cnn2)
    cnn3 = Conv1D(50, 3, activation='relu')(mp1)
    cnn4 = Conv1D(50, 3, activation='relu')(cnn3)
    gap1 = AveragePooling1D((3,))(cnn4)
    dropout1 = Dropout(rate=dropout[0])(gap1)
    flt1 = Flatten()(dropout1)
    rshp2 = Reshape((input_shape[0], -1), input_shape=flt1.shape)(flt1)
    bilstm1 = Bidirectional(LSTM(240,
                                 return_sequences=True,
                                 recurrent_dropout=dropout[1]),
                            merge_mode=merge)(rshp2)
    dense1 = TimeDistributed(Dense(30, activation='relu'))(rshp2)
    dropout2 = Dropout(rate=dropout[2])(dense1)
    prediction = TimeDistributed(Dense(1, activation='sigmoid'))(dropout2)

    model = Model(inp, prediction, name="CNN-bLSTM_per_segment")
    print(model.summary(line_length=75))

where input_shape = (60, 60). However, this definition raises the following error:

TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'

At first I thought this was because the rshp2 layer could not reshape the flt1 output into a (60, X) shape, so I added a print block before the Bidirectional(LSTM) layer:

    print('reshape1: ', rshp.shape)
    print('cnn1: ', cnn1.shape)
    print('cnn2: ', cnn2.shape)
    print('mp1: ', mp1.shape)
    print('cnn3: ', cnn3.shape)
    print('cnn4: ', cnn4.shape)
    print('gap1: ', gap1.shape)
    print('flatten 1: ', flt1.shape)
    print('reshape 2: ', rshp2.shape)

The printed shapes are:

    reshape 1:  (None, 3600, 1)
    cnn1:  (None, 3592, 100)
    cnn2:  (None, 3584, 100)
    mp1:  (None, 1194, 100)
    cnn3:  (None, 1192, 50)
    cnn4:  (None, 1190, 50)
    gap1:  (None, 396, 50)
    flatten 1:  (None, 19800)
    reshape 2:  (None, 60, None)

Looking at the flt1 layer, its output shape is (19800,), which should be reshapable to (60, 330). But for some reason the (60, -1) in the rshp2 layer is not working as intended, as the printed reshape 2: (None, 60, None) shows. When I reshape to (60, 330) explicitly, it works fine. Does anyone know why (60, -1) does not work?

1 Answer:

Answer 0 (score: 1)

The -1 is working as intended.

According to the Reshape documentation, https://www.tensorflow.org/api_docs/python/tf/keras/layers/Reshape

the layer returns a tensor of shape (batch_size,) + target_shape.

So the batch size stays the same, and the other dimensions are computed from your target_shape.

In the docs, look at the last example:

    # also supports shape inference using `-1` as dimension
    model.add(tf.keras.layers.Reshape((-1, 2, 2)))
    model.output_shape

    (None, None, 2, 2)

If you pass -1 in the target shape, Keras stores None for that axis. That is useful when you expect variable-length data along that axis, but if your data always has the same shape, hard-code the dimension so that the static shape is fully defined when you print it later.

N.B.: Also, in the functional API there is no need to specify input_shape=input_shape for intermediate layers. The model will infer that for you.
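For example (a toy sketch reusing the question's (60, 60) input, not the full model), the intermediate Reshape below carries no input_shape argument and Keras still infers every shape from the incoming tensor:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Reshape, Conv1D
from tensorflow.keras.models import Model

inp = Input((60, 60))
# No input_shape here: the functional API infers it from `inp`
rshp = Reshape((60 * 60, 1))(inp)
cnn = Conv1D(100, 9, activation='relu')(rshp)

model = Model(inp, cnn)
print(model.output_shape)   # (None, 3592, 100)
```

The output length 3592 = 3600 - 9 + 1 matches the cnn1 shape printed in the question.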
