Problem training a U-Net for multi-class segmentation with ImageDataGenerator

Asked: 2021-07-19 11:01:39

Tags: python tensorflow keras deep-learning semantic-segmentation

The task I am working on is multi-class segmentation (0-3 classes per image). I have a working U-Net model that trains well on a small dataset. I then augmented the dataset, so I now have almost 15k 512x512 grayscale images. Naturally, I ran into the problem of not having enough hardware resources (RAM, GPU), so I decided to switch to Google Colab and use ImageDataGenerator. I have hit a problem that I cannot solve so far.

    InvalidArgumentError: Conv2DSlowBackpropInput: Size of out_backprop doesn't match computed: actual = 16, computed = 32 spatial_dim: 2 input: 64 filter: 2 output: 16 stride: 2 dilation: 1 [[node model/conv2d_transpose_1/conv2d_transpose (defined at /usr/local/lib/python3.7/dist-packages/keras/backend.py:5360)]] [Op:__inference_train_function_3151]

The only explanation I can think of is that I am not using the generators correctly. I have structured my data like this:

path_to_dataset
│
└───images_dir
│   │
│   └─── images_subdir
│       │   img1.png
│       │   img2.png
│       │   ...
│   
└───masks_dir
│   │
│   └─── masks_subdir
│       │   img1.png
│       │   img2.png
│       │   ...

The subdirectories are there only to make ImageDataGenerator work.

    from tensorflow.keras.preprocessing.image import ImageDataGenerator
    from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, TensorBoard

    data_gen_args = dict(rescale=1./255,)
    image_datagen = ImageDataGenerator(**data_gen_args)
    mask_datagen = ImageDataGenerator(**data_gen_args)
    # image_datagen.fit(images)
    # mask_datagen.fit(masks)
    # Provide the same seed and keyword arguments to the fit and flow methods
    seed = 1
    image_generator = image_datagen.flow_from_directory(
        '/content/drive/MyDrive/DP/preprocess_images/images/final_ds/orig_folder/',
        batch_size=16,
        class_mode=None,
        # color_mode='grayscale',
        seed=seed)
    mask_generator = mask_datagen.flow_from_directory(
        '/content/drive/MyDrive/DP/preprocess_images/images/final_ds/seg_greyscale_folder/',
        batch_size=16,
        class_mode=None,
        # color_mode='grayscale',
        seed=seed)
    # combine generators into one which yields image and masks
    train_generator = zip(image_generator, mask_generator)
    callbacks = [
        ModelCheckpoint('unet_512.h5', verbose=1, save_best_only=True),
        EarlyStopping(patience=5, monitor='val_loss'),
        TensorBoard(log_dir='logs_unet512')
    ]

    history = model.fit(train_generator,
                        verbose=1,
                        epochs=50,
                        callbacks=callbacks,
                        # class_weight=class_weights,
                        shuffle=False)
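
Since the error complains about spatial sizes inside a transposed-convolution layer, a quick sanity check is to compare what the generators actually yield with what the model was built for. This is a minimal sketch (not part of the original post) that reuses the image_generator, mask_generator and model objects defined above:

    # Sketch: pull one batch from each generator and compare its shape with the model input.
    # With class_mode=None, flow_from_directory yields only an array of images per batch.
    img_batch = next(image_generator)
    mask_batch = next(mask_generator)

    print('image batch:', img_batch.shape)
    print('mask batch:', mask_batch.shape)
    print('model expects:', model.input_shape)   # (None, 512, 512, 1) for this U-Net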

So far I have not dealt with creating a data generator for the validation data, because I cannot even get this part to work.
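
As an aside, one way a validation generator could eventually be set up with the same API is sketched below. This is only an illustration under the assumption that the same directory layout is reused; validation_split and subset are standard ImageDataGenerator / flow_from_directory arguments:

    # Sketch: carve a validation subset out of the same directories via validation_split.
    # The training generators would then also need subset='training' with the same split.
    val_gen_args = dict(rescale=1./255, validation_split=0.2)
    val_image_datagen = ImageDataGenerator(**val_gen_args)
    val_mask_datagen = ImageDataGenerator(**val_gen_args)

    val_image_generator = val_image_datagen.flow_from_directory(
        '/content/drive/MyDrive/DP/preprocess_images/images/final_ds/orig_folder/',
        batch_size=16,
        class_mode=None,
        subset='validation',
        seed=seed)
    val_mask_generator = val_mask_datagen.flow_from_directory(
        '/content/drive/MyDrive/DP/preprocess_images/images/final_ds/seg_greyscale_folder/',
        batch_size=16,
        class_mode=None,
        subset='validation',
        seed=seed)
    val_generator = zip(val_image_generator, val_mask_generator)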

For the curious, here is the model.

    from tensorflow.keras.layers import (Input, Conv2D, Conv2DTranspose,
                                         MaxPooling2D, Dropout, concatenate)
    from tensorflow.keras.models import Model

    # IMG_HEIGHT=512, IMG_WIDTH=512, IMG_CHANNELS=1
    inputs = Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
    s = inputs

    # Contraction path
    c1 = Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(s)
    c1 = Dropout(0.1)(c1)
    c1 = Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c1)
    p1 = MaxPooling2D((2, 2))(c1)

    c2 = Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p1)
    c2 = Dropout(0.1)(c2)
    c2 = Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c2)
    p2 = MaxPooling2D((2, 2))(c2)

    c3 = Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p2)
    c3 = Dropout(0.2)(c3)
    c3 = Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c3)
    p3 = MaxPooling2D((2, 2))(c3)

    c4 = Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p3)
    c4 = Dropout(0.2)(c4)
    c4 = Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c4)
    p4 = MaxPooling2D(pool_size=(2, 2))(c4)

    c5 = Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p4)
    c5 = Dropout(0.3)(c5)
    c5 = Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c5)

    # Expansive path
    u6 = Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(c5)
    u6 = concatenate([u6, c4])
    c6 = Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u6)
    c6 = Dropout(0.2)(c6)
    c6 = Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c6)

    u7 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(c6)
    u7 = concatenate([u7, c3])
    c7 = Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u7)
    c7 = Dropout(0.2)(c7)
    c7 = Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c7)

    u8 = Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(c7)
    u8 = concatenate([u8, c2])
    c8 = Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u8)
    c8 = Dropout(0.1)(c8)
    c8 = Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c8)

    u9 = Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same')(c8)
    u9 = concatenate([u9, c1], axis=3)
    c9 = Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u9)
    c9 = Dropout(0.1)(c9)
    c9 = Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c9)

    # n_classes=4
    outputs = Conv2D(n_classes, (1, 1), activation='softmax')(c9)

    model = Model(inputs=[inputs], outputs=[outputs])
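
The compile step is not shown in the post; for a 4-class softmax output trained against one-hot encoded masks, a typical setup (an assumption, not necessarily the author's actual code) would be along these lines:

    # Sketch of a compile step (not shown in the question); assumes one-hot encoded masks.
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    model.summary()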

Edit: I am also planning to increase the number of filters; so far I have been running the same model that previously ran on my personal laptop.

0 Answers