I have the U-Net model from Retina-Unet, but now I am augmenting the images and masks, and it gives me this error: ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None
I want to train on the augmented images and masks and also validate on augmented images and masks.
The batch generator function:
def batch_generator(X_gen,Y_gen):
    yield(X_batch,Y_batch)
model = get_unet(1,img_width,img_hight) #the U-net model
print("Model Summary")
print(model.summary())
print "Check: final output of the network:"
print model.output_shape
#============ Training ==================================
checkpointer = ModelCheckpoint(filepath='./'+'SAEED'+'_best_weights.h5', verbose=2, monitor='val_acc', mode='auto', save_best_only=True) # save only when validation accuracy improves
print("Now augumenting training")
datagen = ImageDataGenerator(rotation_range=120)
#traing augumentation.
train_images_generator = datagen.flow_from_directory(train_images_dir,target_size=(img_width,img_hight),batch_size=30,class_mode=None)
train_mask_generator = datagen.flow_from_directory(train_masks_dir,target_size=(img_width,img_hight),batch_size=30,class_mode=None)
print("Now augumenting val")
#val augumentation.
val_images_generator = datagen.flow_from_directory(val_images_dir,target_size=(img_width,img_hight),batch_size=30,class_mode=None)
val_masks_generator = datagen.flow_from_directory(val_masks_dir,target_size=(img_width,img_hight),batch_size=30,class_mode=None)
print("Now augumenting test")
#test augumentation
test_images_generator = datagen.flow_from_directory(test_images_dir,target_size=(img_width,img_hight),batch_size=25,class_mode=None)
test_masks_generator = datagen.flow_from_directory(test_masks_dir,target_size=(img_width,img_hight),batch_size=25,class_mode=None)
#fitting model.
print("Now fitting the model ")
#model.fit_generator(train_generator,samples_per_epoch = nb_train_samples*2,nb_epoch=nb_epoch,validation_data=val_generator,nb_val_samples=nb_val_samples,callbacks=[checkpointer])
print("train_images_generator size {} and type is {}".format(next(train_images_generator).shape,type(next(train_images_generator))))
print("train_masks_generator size {} and type is {}".format(next(train_mask_generator).shape,type(next(train_mask_generator))))
model.fit_generator(batch_generator(train_images_generator,train_mask_generator),samples_per_epoch = nb_train_samples,nb_epoch=nb_epoch,validation_data=batch_generator(val_images_generator,val_masks_generator),nb_val_samples=nb_val_samples,callbacks=[checkpointer])
print("Finished fitting the model")
Model summary:
Model Summary
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 1, 160, 160) 0
____________________________________________________________________________________________________
convolution2d_1 (Convolution2D) (None, 32, 160, 160) 320 input_1[0][0]
____________________________________________________________________________________________________
dropout_1 (Dropout) (None, 32, 160, 160) 0 convolution2d_1[0][0]
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D) (None, 32, 160, 160) 9248 dropout_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D) (None, 32, 80, 80) 0 convolution2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D) (None, 64, 80, 80) 18496 maxpooling2d_1[0][0]
____________________________________________________________________________________________________
dropout_2 (Dropout) (None, 64, 80, 80) 0 convolution2d_3[0][0]
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D) (None, 64, 80, 80) 36928 dropout_2[0][0]
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D) (None, 64, 40, 40) 0 convolution2d_4[0][0]
____________________________________________________________________________________________________
convolution2d_5 (Convolution2D) (None, 128, 40, 40) 73856 maxpooling2d_2[0][0]
____________________________________________________________________________________________________
dropout_3 (Dropout) (None, 128, 40, 40) 0 convolution2d_5[0][0]
____________________________________________________________________________________________________
convolution2d_6 (Convolution2D) (None, 128, 40, 40) 147584 dropout_3[0][0]
____________________________________________________________________________________________________
upsampling2d_1 (UpSampling2D) (None, 128, 80, 80) 0 convolution2d_6[0][0]
____________________________________________________________________________________________________
merge_1 (Merge) (None, 192, 80, 80) 0 upsampling2d_1[0][0]
convolution2d_4[0][0]
____________________________________________________________________________________________________
convolution2d_7 (Convolution2D) (None, 64, 80, 80) 110656 merge_1[0][0]
____________________________________________________________________________________________________
dropout_4 (Dropout) (None, 64, 80, 80) 0 convolution2d_7[0][0]
____________________________________________________________________________________________________
convolution2d_8 (Convolution2D) (None, 64, 80, 80) 36928 dropout_4[0][0]
____________________________________________________________________________________________________
upsampling2d_2 (UpSampling2D) (None, 64, 160, 160) 0 convolution2d_8[0][0]
____________________________________________________________________________________________________
merge_2 (Merge) (None, 96, 160, 160) 0 upsampling2d_2[0][0]
convolution2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_9 (Convolution2D) (None, 32, 160, 160) 27680 merge_2[0][0]
____________________________________________________________________________________________________
dropout_5 (Dropout) (None, 32, 160, 160) 0 convolution2d_9[0][0]
____________________________________________________________________________________________________
convolution2d_10 (Convolution2D) (None, 32, 160, 160) 9248 dropout_5[0][0]
____________________________________________________________________________________________________
convolution2d_11 (Convolution2D) (None, 2, 160, 160) 66 convolution2d_10[0][0]
____________________________________________________________________________________________________
reshape_1 (Reshape) (None, 2, 25600) 0 convolution2d_11[0][0]
____________________________________________________________________________________________________
permute_1 (Permute) (None, 25600, 2) 0 reshape_1[0][0]
____________________________________________________________________________________________________
activation_1 (Activation) (None, 25600, 2) 0 permute_1[0][0]
====================================================================================================
Total params: 471,010
Trainable params: 471,010
Non-trainable params: 0
Any ideas? Thanks.
Answer 0 (score: 3)
In case anyone runs into the same problem later: the problem was the generator. It is fixed below:
def batch_generator(X_gen, Y_gen):
    while True:
        yield (X_gen.next(), Y_gen.next())  # on Python 3, use next(X_gen), next(Y_gen)
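For context, here is a minimal sketch of how the fixed generator plugs into the training call, reusing the variable names from the question and the Keras 1.x-style fit_generator arguments it already uses (samples_per_epoch/nb_epoch rather than the newer steps_per_epoch/epochs):

```python
# Minimal usage sketch (names taken from the question; Keras 1.x-era API assumed).
# The wrapped generator now pulls one image batch and one mask batch per step and
# yields them as the (x, y) tuple fit_generator expects, so "Found: None" goes away.
train_gen = batch_generator(train_images_generator, train_mask_generator)
val_gen = batch_generator(val_images_generator, val_masks_generator)

model.fit_generator(train_gen,
                    samples_per_epoch=nb_train_samples,
                    nb_epoch=nb_epoch,
                    validation_data=val_gen,
                    nb_val_samples=nb_val_samples,
                    callbacks=[checkpointer])
```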
Answer 1 (score: 0)
In my case, adding class_mode to the generator solved the problem. For example:
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(image_size, image_size),
    batch_size=batch_size,
    class_mode='categorical')
You can choose from:
binary: 1D numpy array of binary labels
categorical: 2D numpy array of one-hot encoded labels. Supports multi-label output.
sparse: 1D numpy array of integer labels
input: images identical to the input images (mainly used to work with autoencoders)
other: numpy array of y_col data
By the way, None should also work... but this was my workaround.
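For a segmentation setup like the one in the question, class_mode=None is indeed the relevant choice: the masks are the targets rather than directory class labels, so each flow_from_directory call yields only image arrays and the two streams have to be paired by hand. A rough sketch of that pairing, reusing the names and settings from the question; following the image/mask example in the Keras docs, two ImageDataGenerator instances get identical arguments and the same seed so shuffling and random rotations stay aligned between images and masks:

```python
# Rough sketch (directory names, sizes and rotation_range taken from the question;
# Keras 1.x-era API assumed). class_mode=None makes each generator yield only the
# image arrays; the shared seed keeps both generators shuffling and transforming
# their files identically, so image i always lines up with mask i.
image_datagen = ImageDataGenerator(rotation_range=120)
mask_datagen = ImageDataGenerator(rotation_range=120)

seed = 1
train_images_generator = image_datagen.flow_from_directory(
    train_images_dir, target_size=(img_width, img_hight),
    batch_size=30, class_mode=None, seed=seed)
train_mask_generator = mask_datagen.flow_from_directory(
    train_masks_dir, target_size=(img_width, img_hight),
    batch_size=30, class_mode=None, seed=seed)

# Paired (augmented image batch, augmented mask batch), ready for fit_generator.
x_batch, y_batch = next(batch_generator(train_images_generator, train_mask_generator))
```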