I am loading data with the Keras ImageDataGenerator and building a U-Net model out of Keras layers.
How can I resolve the ValueError shown below, and how should the input images be reshaped?
Here is the code:
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint

#def gen(train_image, train_mask, val_images, val_mask):
# augmentation settings shared by the image and mask generators
data_gen_args = dict(#featurewise_center=True,
                     #featurewise_std_normalization=True,
                     rotation_range=90,
                     width_shift_range=0.1,
                     height_shift_range=0.1,
                     zoom_range=0.2)
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)
vimage_datagen = ImageDataGenerator(**data_gen_args)
vmask_datagen = ImageDataGenerator(**data_gen_args)
# Provide the same seed and keyword arguments to the fit and flow methods
seed = 1
#image_datagen.fit(images, augment=True, seed=seed)
image_generator = image_datagen.flow_from_directory(
    '/content/drive/My Drive/2019 github projects/dataset1/train_images',
    class_mode=None, target_size=(256, 256),
    seed=seed)
mask_generator = mask_datagen.flow_from_directory(
    '/content/drive/My Drive/2019 github projects/dataset1/train_masl',
    class_mode=None, target_size=(256, 256),
    seed=seed)
vimage_generator = vimage_datagen.flow_from_directory(
    '/content/drive/My Drive/2019 github projects/dataset1/test',
    class_mode=None, target_size=(256, 256),
    seed=seed)
vmask_generator = vmask_datagen.flow_from_directory(
    '/content/drive/My Drive/2019 github projects/dataset1/mask_test',
    class_mode=None, target_size=(256, 256),
    seed=seed)
# combine generators into one which yields image and masks
train_generator = zip(image_generator, mask_generator)
val_generator = zip(vimage_generator,vmask_generator)
# U-Net model declared here as `model` (definition omitted)
model_checkpoint = ModelCheckpoint('unet_membrane.hdf5', monitor='loss',verbose=1, save_best_only=True)
model.fit_generator(train_generator,steps_per_epoch=200,epochs=5,callbacks=[model_checkpoint])
Here is the error I am getting:
ValueError: Input arrays should have the same number of samples as target arrays. Found 15 input samples and 32 target samples.
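One pattern I have tried to sketch out for keeping the two generators in step is below; the shortened paths, the explicit batch_size, and the pair_generator helper are placeholders rather than my actual code, and it assumes the image and mask folders hold the same number of files in matching order. Is this the right direction?

# Minimal sketch (assumptions: identical file counts/ordering in the image and
# mask folders, single-channel masks; pair_generator is an illustrative helper).
from keras.preprocessing.image import ImageDataGenerator

seed = 1
batch_size = 16  # use the same explicit batch_size for images and masks

aug_args = dict(rotation_range=90, width_shift_range=0.1,
                height_shift_range=0.1, zoom_range=0.2)
image_datagen = ImageDataGenerator(**aug_args)
mask_datagen = ImageDataGenerator(**aug_args)

image_generator = image_datagen.flow_from_directory(
    'dataset1/train_images', class_mode=None, target_size=(256, 256),
    batch_size=batch_size, seed=seed)
mask_generator = mask_datagen.flow_from_directory(
    'dataset1/train_mask', class_mode=None, target_size=(256, 256),
    color_mode='grayscale', batch_size=batch_size, seed=seed)

def pair_generator(images, masks):
    # Yield (x, y) tuples so fit_generator sees image and mask batches
    # of exactly the same length on every step.
    while True:
        yield next(images), next(masks)

train_generator = pair_generator(image_generator, mask_generator)

I also wonder whether the "15 input samples and 32 target samples" mismatch simply means the image and mask directories contain different numbers of files, which no generator pairing alone would fix.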