I want to perform segmentation with a CNN, and I use Keras's ImageDataGenerator to generate more data and feed it to my network. Every time I run the code I get this error:
File "C:\Users\abirf\AppData\Local\Continuum\anaconda3\envs\deep_learning\lib\site-packages\numpy\core\shape_base.py", line 434, in stack
    return _nx.concatenate(expanded_arrays, axis=axis, out=out)
File "<__array_function__ internals>", line 6, in concatenate
MemoryError: Unable to allocate 64.0 KiB for an array with shape (1, 128, 128) and data type float32
What exactly is the problem?
Here is my code snippet:
X_path = os.path.join('.........../train_data/', 'images') # input image
Y_path = os.path.join('........./train_data/', 'masks') # ground-truth label
# we create two instances with the same arguments
data_gen_args = dict(featurewise_center=True,
                     featurewise_std_normalization=True,
                     rotation_range=45.,
                     width_shift_range=0.1,
                     height_shift_range=0.1,
                     zoom_range=[0.2])
seed = 1
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)
image_generator = mask_datagen.flow_from_directory(X_path, class_mode=None, batch_size=16, seed=seed,
                                                   target_size=(img_col, img_row), color_mode='grayscale')
mask_generator = mask_datagen.flow_from_directory(Y_path, class_mode=None, batch_size=16, seed=seed,
                                                  target_size=(img_col, img_row), color_mode='grayscale')
train_generator = zip(image_generator, mask_generator)
num_train = len(image_generator)
#########################################################
# this contains the architecture used to perform the training
#########################################################
history = model.fit(list(train_generator), steps_per_epoch=num_train, shuffle=True,
                    validation_split=0.1, batch_size=16, epochs=50,
                    callbacks=[earlystopper, checkpointer])
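For reference, a minimal sketch of how zipped generators are usually fed to training (an assumption about the intended setup, not a verified fix for this exact code): the zip of two DirectoryIterator objects never terminates, so list(train_generator) tries to collect an endless stream of augmented batches and will likely exhaust memory, which is consistent with a MemoryError on even a 64 KiB allocation. The helper name combine_generators below is hypothetical, and the call assumes TF 2.x Keras, where model.fit accepts a Python generator; validation_split, batch_size and shuffle are dropped because they do not apply to generator input.

def combine_generators(image_gen, mask_gen):
    # yield (image_batch, mask_batch) pairs lazily; both iterators loop forever
    while True:
        yield next(image_gen), next(mask_gen)

train_generator = combine_generators(image_generator, mask_generator)
history = model.fit(train_generator,
                    steps_per_epoch=num_train,  # batches per epoch; batch size is set in flow_from_directory
                    epochs=50,
                    callbacks=[earlystopper, checkpointer])

With the older standalone Keras API, model.fit_generator(train_generator, steps_per_epoch=num_train, epochs=50, callbacks=[earlystopper, checkpointer]) would be the equivalent call.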