Softmax causes a high loss value in Keras

Time: 2019-05-08 10:10:58

Tags: python keras deep-learning

I am working on a Keras project in which I use a neural network to segment images.

The model is based on U-Net, which I implemented in 3D: https://en.wikipedia.org/wiki/U-Net

Here is my U-Net implementation in Keras:

from keras.models import Model
from keras.layers import Input, Conv3D, MaxPooling3D, Conv3DTranspose, concatenate
from keras.optimizers import Adam
from keras.utils import multi_gpu_model


def createUnet3D(n_ch, patch_size, nbclass, deph, multi_gpu=False, nbGPU=2):
    inputs = Input((deph, patch_size, patch_size, 1))

    # Contracting path: two 3x3x3 convolutions, then 2x2x2 max pooling
    conv1 = Conv3D(16, (3, 3, 3), activation='relu', padding='same')(inputs)
    conv1 = Conv3D(16, (3, 3, 3), activation='relu', padding='same')(conv1)
    pool1 = MaxPooling3D(pool_size=(2, 2, 2))(conv1)

    conv2 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(pool1)
    conv2 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(conv2)
    pool2 = MaxPooling3D(pool_size=(2, 2, 2))(conv2)

    conv3 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(pool2)
    conv3 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(conv3)
    pool3 = MaxPooling3D(pool_size=(2, 2, 2))(conv3)

    conv4 = Conv3D(128, (3, 3, 3), activation='relu', padding='same')(pool3)
    conv4 = Conv3D(128, (3, 3, 3), activation='relu', padding='same')(conv4)
    pool4 = MaxPooling3D(pool_size=(2, 2, 2))(conv4)

    # Bottleneck
    conv5 = Conv3D(256, (3, 3, 3), activation='relu', padding='same')(pool4)
    conv5 = Conv3D(256, (3, 3, 3), activation='relu', padding='same')(conv5)

    # Expanding path: transposed convolutions with skip connections
    up6 = concatenate([Conv3DTranspose(128, (2, 2, 2), strides=(2, 2, 2), padding='same')(conv5), conv4], axis=-1)
    conv6 = Conv3D(512, (3, 3, 3), activation='relu', padding='same')(up6)
    conv6 = Conv3D(512, (3, 3, 3), activation='relu', padding='same')(conv6)

    up7 = concatenate([Conv3DTranspose(64, (2, 2, 2), strides=(2, 2, 2), padding='same')(conv6), conv3], axis=-1)
    conv7 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(up7)
    conv7 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(conv7)

    up8 = concatenate([Conv3DTranspose(32, (2, 2, 2), strides=(2, 2, 2), padding='same')(conv7), conv2], axis=-1)
    conv8 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(up8)
    conv8 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(conv8)

    up9 = concatenate([Conv3DTranspose(16, (2, 2, 2), strides=(2, 2, 2), padding='same')(conv8), conv1], axis=-1)
    conv9 = Conv3D(16, (3, 3, 3), activation='relu', padding='same')(up9)
    conv9 = Conv3D(16, (3, 3, 3), activation='relu', padding='same')(conv9)

    # Final 1x1x1 convolution: one output channel with softmax activation
    conv10 = Conv3D(1, (1, 1, 1), activation='softmax')(conv9)

    model = Model(inputs=[inputs], outputs=[conv10])

    if multi_gpu:
        model = multi_gpu_model(model, nbGPU)

    model.summary()
    model.compile(optimizer=Adam(lr=1e-5), loss='binary_crossentropy', metrics=['accuracy'])

    return model
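For reference, this is roughly how I build and train the model. The patch size, depth, and random arrays below are placeholder values for illustration only, not my real data:

import numpy as np

deph, patch_size = 32, 64  # placeholders; both must be divisible by 16 (four poolings)
model = createUnet3D(n_ch=1, patch_size=patch_size, nbclass=2, deph=deph)

# Dummy data with the shape the network expects: (batch, depth, height, width, channels)
X = np.random.rand(4, deph, patch_size, patch_size, 1).astype('float32')
y = (np.random.rand(4, deph, patch_size, patch_size, 1) > 0.5).astype('float32')

model.fit(X, y, batch_size=2, epochs=1)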

The problem is that when I start training, the accuracy stays stuck at 0.0 and the loss stays stuck at 20.0.

To work around the problem, I replaced the softmax function with a sigmoid function, and training then proceeds normally.
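Concretely, the only change that makes training work is the final layer:

# Swapping softmax for sigmoid in the last layer fixes the training behaviour:
conv10 = Conv3D(1, (1, 1, 1), activation='sigmoid')(conv9)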

But I would really like to use the softmax function. Why does sigmoid work well while softmax does not?
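To narrow things down, I checked what softmax actually produces on a single output channel. This is a minimal standalone sketch, independent of my training data:

import numpy as np
from keras.models import Sequential
from keras.layers import Conv3D

# Softmax normalizes over the channel axis by default, so with only one
# channel the output is exp(x) / exp(x) = 1.0 for every voxel, whatever the input.
m = Sequential([Conv3D(1, (1, 1, 1), activation='softmax',
                       input_shape=(4, 4, 4, 1))])
out = m.predict(np.random.rand(1, 4, 4, 4, 1))
print(out.min(), out.max())  # prints 1.0 1.0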

0 Answers:

No answers yet.