I am trying to use one of Keras' recurrent models (here ConvLSTM2D) for my neural network. My goal is to take two images of the same object from different angles and, based on the features in both images, determine which kind of object it is.
The images are generated and stored in separate folders, but with similar paths, e.g.:
-dir1 -> class1 -> image001.jpg
      -> class2 -> image101.jpg
-dir2 -> class1 -> image001.jpg
      -> class2 -> image101.jpg
image001.jpg in dir1 and image001.jpg in dir2 show the same object, just from different angles.
My problem:
ConvLSTM2D takes images with an additional time dimension. So where my images used to be (128, 128, 1), I now turn each pair into (2, 128, 128, 1) with np.stack. However, I do not know how to get my data pipeline to actually produce batches in that shape.
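For reference, this is the shape change I mean, sketched with made-up zero arrays standing in for the two views:

import numpy as np

img_a = np.zeros((128, 128, 1))  # hypothetical view of the object from dir1
img_b = np.zeros((128, 128, 1))  # hypothetical view of the same object from dir2

pair = np.stack((img_a, img_b))  # stacks along a new leading (time) axis
print(pair.shape)                # (2, 128, 128, 1)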
My attempt
I have tried the solution described here. As I understand it, keeping the seed the same should ensure that the two images belong to the same object (?). The problem appears when the model tries to consume the images, however. Keras gives the following output message:
ValueError: Error when checking model input: the list of Numpy arrays that you are
passing to your model is not the size the model expected. Expected to see 1 array(s),
but instead got the following list of 2 arrays:
[array([[[[0.07843138],
          [0.02745098],
          [0.07450981],
          ...,
          [0.02745098],
          [0.03137255],
          [0.0509804 ]],

         [[0.05490196],
          [0.10980393],...
This made me think I need to stack the two images with numpy.stack, which is what I tried next. However, that led to the following error:
ValueError: Error when checking input: expected conv_lst_m2d_1_input to have shape
(2, 128, 128, 1) but got array with shape (100, 128, 128, 1)
I can make sense of this error, since I declared the input dimensions as (None, no_of_images, width, height, channels). As each sample is a pair of images, the number of images is 2. The actual input the model receives is (100, 128, 128, 1), where the first element is the batch size. So apparently I am not stacking the two images into the time axis correctly.
This leaves me confused about how to solve the problem. I have defined the model so that the input shape is (samples=None, 2, 128, 128, 1), but I do not know how to get pairs of images into that format.
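To make the mismatch concrete, here is a small shape check with zero arrays standing in for two generator batches:

import numpy as np

batch1 = np.zeros((100, 128, 128, 1))  # batch of first-view images
batch2 = np.zeros((100, 128, 128, 1))  # batch of second-view images

stacked = np.stack((batch1, batch2))
print(stacked.shape)  # (2, 100, 128, 128, 1) -- the new axis lands in front of the batch axis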
My code
Model
from keras.models import Sequential
from keras.layers import (ConvLSTM2D, Activation, Dropout, TimeDistributed,
                          MaxPooling2D, Flatten, Dense)

#batch_size, img_width, img_height and num_classes are defined elsewhere in my script.
model = Sequential()
no_of_img = 2

#Adding additional convolution + maxpool layers 15/1/19
model.add(ConvLSTM2D(32, (5,5),
                     batch_input_shape=(batch_size, no_of_img, img_width, img_height, 1),
                     return_sequences=True))
model.add(Activation('relu'))
model.add(Dropout(0.4))
model.add(TimeDistributed(MaxPooling2D((2,2))))

model.add(ConvLSTM2D(64, (3,3), return_sequences=True))
model.add(Activation('relu'))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2,2))))
model.add(Dropout(0.2))

model.add(ConvLSTM2D(128, (3,3), return_sequences=True))
model.add(Activation('relu'))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2,2))))
model.add(Dropout(0.2))

model.add(ConvLSTM2D(256, (3,3), return_sequences=True))
model.add(Activation('relu'))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2,2))))
model.add(Dropout(0.2))

model.add(Flatten())
#A dense layer directly on our 128x128 pixels would be far too large,
#hence the convolutional and maxpool layers beforehand.
model.add(Dense(128,  #dimensionality of output space
                #input_shape=(128,128,1),  #commented out: only the first layer needs an input shape
                ))
model.add(Activation('relu'))
#model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='RMSProp', metrics=['accuracy'])
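As a quick sanity check (assuming batch_size = 100 and 128x128 grayscale images), the declared input shape can be printed directly:

print(model.input_shape)  # (100, 2, 128, 128, 1) -- (batch, time, width, height, channels)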
Current attempt at data preparation
import numpy
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,  #Normalize inputs from 0-255 to 0-1
    horizontal_flip=True,
    vertical_flip=True)
#Note: using subset='training'/'validation' below also requires
#validation_split to be set on the ImageDataGenerator.

test_datagen = ImageDataGenerator(rescale=1./255)

def generate_generator_multiple(generator, dir1, dir2, batch_size,
                                img_width, img_height, subset):
    #Same seed for both generators for consistency, so they yield matching files.
    genX1 = generator.flow_from_directory(dir1,
                                          color_mode='grayscale',
                                          target_size=(img_width, img_height),
                                          batch_size=batch_size,
                                          class_mode='categorical',
                                          shuffle=False,
                                          subset=subset,
                                          seed=1)
    genX2 = generator.flow_from_directory(dir2,
                                          color_mode='grayscale',
                                          target_size=(img_width, img_height),
                                          batch_size=batch_size,
                                          class_mode='categorical',
                                          shuffle=False,
                                          subset=subset,
                                          seed=1)
    while True:
        X1i = genX1.next()
        X2i = genX2.next()
        #Yields both images and their mutual label
        yield numpy.stack((X1i[0], X2i[0])), X1i[1]

train_generator = generate_generator_multiple(generator=train_datagen,
                                              dir1=train_data_dirA,
                                              dir2=train_data_dirB,
                                              batch_size=batch_size,
                                              img_width=img_width,
                                              img_height=img_height,
                                              subset='training')

validation_generator = generate_generator_multiple(generator=test_datagen,
                                                   dir1=train_data_dirA,
                                                   dir2=train_data_dirB,
                                                   batch_size=batch_size,
                                                   img_width=img_width,
                                                   img_height=img_height,
                                                   subset='validation')
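For completeness, the generators are then meant to be consumed with fit_generator along these lines (the step counts are placeholders, not my actual values):

model.fit_generator(train_generator,
                    steps_per_epoch=train_samples // batch_size,  #placeholder
                    epochs=epochs,                                #placeholder
                    validation_data=validation_generator,
                    validation_steps=val_samples // batch_size)   #placeholder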
Answer 0 (score: 0)
Figured it out. After np.stack, the result comes out in the format (time, batch_size, width, height, channels). I swapped the time and batch_size axes with np.transpose and fed the images into the model.
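A minimal sketch of that fix, with zero arrays standing in for the two generator batches:

import numpy as np

X1 = np.zeros((100, 128, 128, 1))  # batch of first-view images
X2 = np.zeros((100, 128, 128, 1))  # batch of second-view images

stacked = np.stack((X1, X2))                    # (2, 100, 128, 128, 1): time axis first
batch = np.transpose(stacked, (1, 0, 2, 3, 4))  # (100, 2, 128, 128, 1): batch axis first
print(batch.shape)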