I want to preprocess a large image dataset (600k images) for training a model. However, it uses far too much memory, and none of the solutions I have found fit my problem. Here is part of my code. I am still new to deep learning and I don't think I am preprocessing the data well. If anyone knows how to fix this memory problem, I would be grateful.
# Imports assumed by this snippet
import random
import numpy as np
import pandas as pd
import keras
from keras.preprocessing.image import load_img, img_to_array

# Read the CSV file
data_frame = pd.read_csv("D:\\Downloads\\ndsc-beginner\\train.csv")

# Load one image as a grayscale numpy array
def load_image(img_path, target_size=(256, 256)):
    # Append the .jpg extension if the path does not already end with it
    if img_path[-4:] != '.jpg':
        img = load_img(img_path + '.jpg', target_size=target_size, grayscale=True)
    else:
        img = load_img(img_path, target_size=target_size, grayscale=True)
    # Convert to a numpy array
    return img_to_array(img)

IMG_SIZE = 256
image_arr = []
# Get the category column values
category_id = data_frame['Category']
# One-hot encode the category - there are 50 categories
dummy_cat_id = keras.utils.np_utils.to_categorical(category_id, 50)
# Get the image-path column values
path_list = data_frame.iloc[1:, -1]

# Batch generator
def batch_gen(data, batch_size):
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

# Append each (image array, category label) pair to image_arr
def extract_data(data_frame):
    total_count = len(path_list)
    batch_size = 1000
    index = 0
    for path in batch_gen(path_list, batch_size):
        for mini_path in path:
            image_arr.append([load_image(mini_path), dummy_cat_id[index]])
            print(index)
            index += 1

#extract_data(data_frame)
random.shuffle(image_arr)
# Features and labels for the training data
trainImages = np.array([i[0] for i in image_arr]).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
trainLabels = np.array([i[1] for i in image_arr])
trainImages = trainImages.astype('float32')
trainImages /= 255.0
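A quick back-of-the-envelope check (assuming 600k grayscale 256×256 images, stored as float32 after the `astype('float32')` step) shows why accumulating everything in `image_arr` cannot fit in RAM:

```python
# Rough memory needed to hold the whole dataset in RAM at once.
n_images = 600_000
height, width, channels = 256, 256, 1
bytes_per_pixel = 4  # float32

total_bytes = n_images * height * width * channels * bytes_per_pixel
print(f"{total_bytes / 1024**3:.1f} GiB")  # ~146.5 GiB
```

So the issue is not the batch generator itself but that every batch is appended to one ever-growing list.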
Answer 0 (score: 0)
I see that in your preprocessing you only convert the images to grayscale and normalize them. If you are using Keras, you can use an ImageDataGenerator to do the normalization and the grayscale conversion for you. Make sure you pass a path that contains one subfolder per image class. You can change class_mode to 'categorical' for multi-class data.
train_datagen = ImageDataGenerator(rescale=1./255)
train_gen = train_datagen.flow_from_directory(
    f'{readPath}/training/',
    target_size=(100, 100),
    color_mode='grayscale',
    batch_size=32,
    classes=['cat', 'dog'],
    class_mode='binary'
)
To train the model, you can then use the model.fit_generator() function, which consumes batches from the generator instead of requiring the whole dataset in memory.
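Since the question's paths come from a CSV rather than class folders, the same streaming idea also works as a plain Python generator that yields one `(images, labels)` batch at a time, so only `batch_size` images are ever in memory. This is a sketch, not the asker's exact code: the JPEG decoding is stubbed out behind a `load_fn` callback, which in the real code would be the question's `load_image` function.

```python
import numpy as np

def image_batch_generator(paths, labels, batch_size, load_fn):
    """Yield (images, labels) batches; only batch_size images are held at once."""
    n = len(paths)
    while True:  # fit_generator expects a generator that never raises StopIteration
        for start in range(0, n, batch_size):
            batch_paths = paths[start:start + batch_size]
            batch_labels = labels[start:start + batch_size]
            images = np.stack([load_fn(p) for p in batch_paths])
            yield images.astype('float32') / 255.0, np.asarray(batch_labels)

# Demo with a stub loader instead of real JPEG decoding (assumption for illustration):
fake_load = lambda path: np.zeros((256, 256, 1), dtype='uint8')
gen = image_batch_generator(['a.jpg'] * 10, [0] * 10, batch_size=4, load_fn=fake_load)
images, labels = next(gen)
print(images.shape)  # (4, 256, 256, 1)
```

Passing such a generator (with real image loading) to model.fit_generator(), together with steps_per_epoch = ceil(n / batch_size), replaces the need to build trainImages up front.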