How to prevent the TensorFlow Input layer from generating a batch dimension

Asked: 2020-10-24 12:05:23

Tags: python tensorflow machine-learning keras keras-layer

I recently updated to the latest version of TensorFlow, 2.3.1, and after the update my model no longer works:

import tensorflow as tf
from tensorflow.keras import layers

# Imports added for completeness; ds_info is assumed to come from
# tfds.load(..., with_info=True).
model = tf.keras.Sequential([
        layers.Input(shape=input_shape), # input_shape: (1623, 105, 105, 3)
        layers.experimental.preprocessing.Rescaling(1./255),
        layers.Conv2D(32, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dense(ds_info.features['label'].num_classes)
    ])

The problem is that the Input layer adds a new batch_size dimension, which leads to the following error:

Input 0 of layer max_pooling2d_22 is incompatible with the layer: expected ndim=4, found ndim=5. Full shape received: [None, 1623, 103, 103, 32]

How can I prevent this error, or otherwise fix the problem?
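For context, Keras always prepends an unspecified batch axis (shown as `None`) to whatever `shape` is passed to `Input`. A minimal sketch of the resulting rank, using plain Python tuples so no TensorFlow install is needed (the shapes are taken from the question):

```python
# Input(shape=...) describes a single sample; Keras prepends the batch
# axis (None) automatically when the tensor is built.
input_shape = (1623, 105, 105, 3)  # full dataset shape, including sample count

# Passing the full dataset shape therefore produces a 5-D tensor:
built_shape = (None,) + input_shape
print(built_shape)       # (None, 1623, 105, 105, 3)
print(len(built_shape))  # 5 -- but Conv2D/MaxPooling2D expect ndim=4
```

This is why the error reports `found ndim=5`: the leading 1623 (the number of samples) is being treated as an extra spatial dimension.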

1 Answer:

Answer 0 (score: 1):

When specifying the input shape, you need to omit the number of samples. That is because Keras can accept any number of samples. So try this:

layers.Input(shape=input_shape[1:]),

This specifies the input shape as (rows, columns, channels), omitting the number of samples.
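As a sketch of the fix, slicing off the first element of the dataset shape yields the per-sample shape that Keras expects (assuming `input_shape` is the (num_samples, rows, columns, channels) tuple from the question):

```python
input_shape = (1623, 105, 105, 3)  # (num_samples, rows, columns, channels)

per_sample_shape = input_shape[1:]
print(per_sample_shape)  # (105, 105, 3)

# Keras then prepends the batch axis itself, so the built tensor is
# (None, 105, 105, 3) -> ndim=4, which Conv2D/MaxPooling2D accept.
```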