I am trying to use U-Net for a semantic segmentation problem. The mask images are binary. But during training I found that my loss becomes negative. I am using the binary_crossentropy loss. Here is my code:
# Scale images and masks from [0, 255] down to [0, 1].
X_train = X_train / 255
y_train = y_train / 255
X_val = X_val / 255
y_val = y_val / 255
All arrays are of type np.float32.
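For reference, a quick way to check the value ranges after the division (assuming X_train and y_train are plain NumPy arrays):

import numpy as np

print("image range:", X_train.min(), X_train.max())  # expect roughly [0, 1]
print("mask values:", np.unique(y_train))             # expect only 0.0 and 1.0 for binary masks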
Then I use ImageDataGenerator to augment the images; the code is as follows:
def image_augmentation(X_train, y_train):
    # Augmentation parameters shared by the image and mask generators.
    data_gen_args = dict(featurewise_center=True,
                         featurewise_std_normalization=True,
                         rotation_range=90.,
                         width_shift_range=0.1,
                         height_shift_range=0.1,
                         zoom_range=0.2,
                         horizontal_flip=True,
                         vertical_flip=True)
    image_datagen = ImageDataGenerator(**data_gen_args)
    mask_datagen = ImageDataGenerator(**data_gen_args)

    # Use the same seed so images and masks receive identical random transforms.
    seed = 42
    image_datagen.fit(X_train, augment=True, seed=seed)
    mask_datagen.fit(y_train, augment=True, seed=seed)

    image_generator = image_datagen.flow(X_train, batch_size=8, seed=seed)
    mask_generator = mask_datagen.flow(y_train, batch_size=8, seed=seed)

    while True:
        yield (image_generator.next(), mask_generator.next())
train_generator = image_augmentation(X_train, y_train)

pat_init = 50
pat = pat_init
learning_rate = 1e-4

# Change this path to the checkpoint file pattern you want.
file_path = "./model_v1/improvement-{epoch:02d}-{val_my_iou_metric:.5f}.hdf5"
checkpoint = ModelCheckpoint(file_path, monitor='val_my_iou_metric', verbose=1,
                             save_best_only=True, mode='max')
reduce_lr = ReduceLROnPlateau(monitor='val_loss', mode='auto', factor=0.5,
                              patience=5, min_lr=1e-9, verbose=1)

model.compile(loss='binary_crossentropy', optimizer=Adam(lr=learning_rate),
              metrics=[my_iou_metric])

# Use the image data augmentation above to achieve a better result.
model.fit_generator(
    train_generator, steps_per_epoch=2000, epochs=300,
    validation_data=(X_val, y_val), verbose=1,
    callbacks=[checkpoint, reduce_lr]
)
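To see what actually reaches the loss after augmentation, a single batch from the generator can be inspected (a quick check, using only the train_generator defined above):

imgs, masks = next(train_generator)
print("image batch:", imgs.shape, imgs.min(), imgs.max())
print("mask batch: ", masks.shape, masks.min(), masks.max())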
The last layer of my network is defined as follows:
output = Conv2D(1, activation='sigmoid',
                kernel_size=(1, 1),
                padding='same',
                data_format='channels_last')(x)
I am really curious why this is happening. Isn't the output of the sigmoid function between 0 and 1?
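For reference, the per-pixel binary cross-entropy I expect Keras to compute here is

loss = -( y * log(p) + (1 - y) * log(1 - p) )

which should be nonnegative as long as the target y stays in [0, 1] and the prediction p stays in (0, 1).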
If you have any ideas, please discuss them with me. Thank you very much!
Answer 0 (score: -1)
Use

samplewise_center=True,
samplewise_std_normalization=True

in the ImageDataGenerator.
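A minimal sketch of how I read this suggestion, assuming it simply replaces the featurewise options in the data_gen_args dictionary from the question while the rest of the pipeline stays the same:

data_gen_args = dict(samplewise_center=True,
                     samplewise_std_normalization=True,
                     rotation_range=90.,
                     width_shift_range=0.1,
                     height_shift_range=0.1,
                     zoom_range=0.2,
                     horizontal_flip=True,
                     vertical_flip=True)
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)
# Samplewise statistics are computed per image at transform time,
# so the datagen.fit(...) calls are only needed for the featurewise options.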