My situation is this: we have several sibling items, each with its own data, stored in different directories that share the same subdirectory structure. I want to use all of this data to train a model, but if I copy everything into one folder I can no longer track which data came from which source (and new data is created from time to time, so re-copying the files every time is not practical). My data is currently stored like this:
-user01
-user02
-user03
...
(they all have a similar subdirectory structure)
I have been searching for a solution, but I have only found multi-input cases, here and here, which concatenate multiple inputs into one parallel input; that is not my case.
I know flow_from_directory() can only be fed from one directory at a time, so how can I build a custom generator that can be fed from multiple directories at once?
If my question is low quality, please suggest improvements. I have also searched Keras's GitHub but found nothing I could adapt.
Thanks.
Answer 0 (score: 1)
Keras's ImageDataGenerator flow_from_directory method has a follow_links parameter.
Maybe you could create one directory containing symbolic links to the files in all of the other directories.
This Stack Overflow question discusses using symbolic links with Keras's ImageDataGenerator: Understanding 'follow_links' argument in Keras's ImageDataGenerator?
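A minimal sketch of that idea, using only the standard library (all paths and the user01/user02 layout are illustrative): build one merged directory whose class subfolders contain symlinks to the per-user files, then point flow_from_directory(..., follow_links=True) at the merged directory. Only the symlink setup is shown here; the Keras call is the usual API from the answer above.

```python
import os
import tempfile

# Illustrative per-user layout: user01/cats, user02/cats, ...
root = tempfile.mkdtemp()
for user in ("user01", "user02"):
    d = os.path.join(root, user, "cats")
    os.makedirs(d)
    open(os.path.join(d, f"{user}.jpg"), "w").close()

# One merged class directory of symlinks; filenames keep the source user visible
merged = os.path.join(root, "merged", "cats")
os.makedirs(merged)
for user in ("user01", "user02"):
    src = os.path.join(root, user, "cats")
    for fname in os.listdir(src):
        os.symlink(os.path.join(src, fname), os.path.join(merged, fname))

# flow_from_directory(os.path.join(root, "merged"), ..., follow_links=True)
# would now see both users' files; os.walk counts them the same way:
n = sum(len(files)
        for _, _, files in os.walk(os.path.join(root, "merged"), followlinks=True))
print(n)  # 2
```

New files can later be linked into the merged tree without copying, which addresses the "new data is created from time to time" concern.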
Answer 1 (score: 1)
After so many days, I hope you have found a solution to your problem, but I will share another idea here so that newcomers like me who run into the same problem in the future can find help.
A few days ago I ran into this kind of problem. As user3731622 said, follow_links will be a solution to your problem. In addition, I think the idea of merging two data generators will work. However, in that case, the batch size of each sub-generator has to be determined in proportion to the amount of data in its directory.
Batch size of each sub-generator:

b = (B × n) / Σn

where
b = batch size of a sub-generator
B = desired batch size of the merged generator
n = number of images in that sub-generator's directory
Σn = total number of images across all directories
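The proportional split can be sketched in plain Python, generalized to any number of directories (the function name is illustrative; integer division leaves a remainder, which is given to the last directory so the sub-batch sizes still sum to B):

```python
def sub_batch_sizes(image_counts, merged_batch_size):
    """Split a merged batch size B across directories in proportion
    to their image counts n (b = B * n / sum(n), rounded down)."""
    total = sum(image_counts)
    sizes = [(n * merged_batch_size) // total for n in image_counts]
    sizes[-1] += merged_batch_size - sum(sizes)  # absorb rounding remainder
    return sizes

print(sub_batch_sizes([300, 100], 32))  # [24, 8]
```

With two directories this reduces to the generator1_batch_size / generator2_batch_size computation in the code below.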
See the code below; it may help:
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import Sequence
import matplotlib.pyplot as plt
import numpy as np
import os
class MergedGenerators(Sequence):

    def __init__(self, batch_size, generators=[], sub_batch_size=[]):
        self.generators = generators
        self.sub_batch_size = sub_batch_size
        self.batch_size = batch_size

    def __len__(self):
        return int(
            sum([(len(self.generators[idx]) * self.sub_batch_size[idx])
                 for idx in range(len(self.sub_batch_size))]) /
            self.batch_size)

    def __getitem__(self, index):
        """Getting items from the generators and packing them"""
        X_batch = []
        Y_batch = []
        for generator in self.generators:
            if generator.class_mode is None:
                x1 = generator[index % len(generator)]
                X_batch = [*X_batch, *x1]
            else:
                x1, y1 = generator[index % len(generator)]
                X_batch = [*X_batch, *x1]
                Y_batch = [*Y_batch, *y1]

        if self.generators[0].class_mode is None:
            return np.array(X_batch)
        return np.array(X_batch), np.array(Y_batch)
def build_datagenerator(dir1=None, dir2=None, batch_size=32):
    n_images_in_dir1 = sum([len(files) for r, d, files in os.walk(dir1)])
    n_images_in_dir2 = sum([len(files) for r, d, files in os.walk(dir2)])

    # The two generators need different batch sizes, because the two
    # directories do not hold the same number of images and each directory's
    # share of the merged batch should stay proportional
    generator1_batch_size = int((n_images_in_dir1 * batch_size) /
                                (n_images_in_dir1 + n_images_in_dir2))
    generator2_batch_size = batch_size - generator1_batch_size

    generator1 = ImageDataGenerator(
        rescale=1. / 255,
        shear_range=0.2,
        zoom_range=0.2,
        rotation_range=5.,
        horizontal_flip=True,
    )

    # generator2 has different image augmentation attributes than generator1
    generator2 = ImageDataGenerator(
        rescale=1. / 255,
        zoom_range=0.2,
        horizontal_flip=False,
    )

    generator1 = generator1.flow_from_directory(
        dir1,
        target_size=(128, 128),
        color_mode='rgb',
        class_mode=None,
        batch_size=generator1_batch_size,
        shuffle=True,
        seed=42,
        interpolation="bicubic",
    )
    generator2 = generator2.flow_from_directory(
        dir2,
        target_size=(128, 128),
        color_mode='rgb',
        class_mode=None,
        batch_size=generator2_batch_size,
        shuffle=True,
        seed=42,
        interpolation="bicubic",
    )

    return MergedGenerators(
        batch_size,
        generators=[generator1, generator2],
        sub_batch_size=[generator1_batch_size, generator2_batch_size])
def test_datagen(batch_size=32):
    datagen = build_datagenerator(dir1="./asdf",
                                  dir2="./asdf2",
                                  batch_size=batch_size)
    print("Datagenerator length (Batch count):", len(datagen))

    for batch_count, image_batch in enumerate(datagen):
        if batch_count == 1:
            break
        print("Images: ", image_batch.shape)

        plt.figure(figsize=(10, 10))
        for i in range(image_batch.shape[0]):
            plt.subplot(1, batch_size, i + 1)
            plt.imshow(image_batch[i], interpolation='nearest')
            plt.axis('off')
        plt.tight_layout()
        plt.show()

test_datagen(4)
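As a quick sanity check of the concatenation step without Keras installed, the same interleave-and-concatenate logic can be mimicked with plain NumPy batches (the stub class is illustrative, standing in for a DirectoryIterator with class_mode=None; shapes match the 128×128 RGB setup above):

```python
import numpy as np

class StubGenerator:
    """Minimal stand-in for a Keras DirectoryIterator: a list of batches."""
    def __init__(self, batches):
        self.batches = batches
        self.class_mode = None  # input-only, like class_mode=None above
    def __len__(self):
        return len(self.batches)
    def __getitem__(self, index):
        return self.batches[index]

def merged_batch(generators, index):
    """Concatenate one sub-batch from each generator, as __getitem__ does."""
    parts = [gen[index % len(gen)] for gen in generators]
    return np.concatenate(parts, axis=0)

# Two "directories" contributing 3 and 1 images per merged batch of 4
gen_a = StubGenerator([np.zeros((3, 128, 128, 3)) for _ in range(5)])
gen_b = StubGenerator([np.ones((1, 128, 128, 3)) for _ in range(2)])
batch = merged_batch([gen_a, gen_b], index=0)
print(batch.shape)  # (4, 128, 128, 3)
```

Note that the `index % len(generator)` wrap-around means the smaller directory's batches repeat within an epoch, which is what equalizes each directory's share of every merged batch.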