I am setting up few-shot image segmentation with Keras. My first network (VGG-like) outputs a feature vector of shape [2, 1024] that separates foreground and background information.
I want to compare a prototype output (from a fixed image) against the mean feature vector of all other images. My loss function is a nearest-neighbour implementation that is used to update the weights.
My GPU can fit at most 4 images at this resolution.
My question is: how do I compute the loss against the mean output over all images during training?
Any hint would be great.
The inputs are images; the labels mark foreground and background separately, with shape [2, 512, 512, 1].
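For concreteness, this is the comparison I have in mind, sketched in NumPy (the batch size of 4, the shapes, and the squared-Euclidean distance are assumptions for illustration):

```python
import numpy as np

# hypothetical shapes: 2 classes (foreground/background), 1024-dim features
prototype = np.random.rand(2, 1024)          # fixed prototype image's features
batch_features = np.random.rand(4, 2, 1024)  # one [2, 1024] vector per image

# mean feature vector over the batch, per class
mean_features = batch_features.mean(axis=0)  # shape (2, 1024)

# nearest-neighbour style loss: squared Euclidean distance to the prototype
loss = np.sum((mean_features - prototype) ** 2, axis=1)  # one distance per class
print(loss.shape)  # (2,)
```

The difficulty is doing this inside Keras training, where the mean would have to span more images than fit in one batch.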
def conv_model(self, input):
    # encoder block 1
    c1 = Conv2D(self.feat_lev_1, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(input)
    c1 = BatchNormalization()(c1)
    c1 = Conv2D(self.feat_lev_1, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c1)
    c1 = BatchNormalization()(c1)
    p1 = MaxPooling2D((2, 2))(c1)

    # encoder block 2
    c2 = Conv2D(self.feat_lev_2, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p1)
    c2 = BatchNormalization()(c2)
    c2 = Conv2D(self.feat_lev_2, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c2)
    c2 = BatchNormalization()(c2)
    p2 = MaxPooling2D((2, 2))(c2)

    # encoder block 3
    c3 = Conv2D(self.feat_lev_3, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p2)
    c3 = BatchNormalization()(c3)
    c3 = Conv2D(self.feat_lev_3, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c3)
    c3 = BatchNormalization()(c3)
    p3 = MaxPooling2D((2, 2))(c3)

    # encoder block 4
    c4 = Conv2D(self.feat_lev_4, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p3)
    c4 = BatchNormalization()(c4)
    c4 = Conv2D(self.feat_lev_4, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c4)
    c4 = BatchNormalization()(c4)
    #d4 = Dropout(0.5)(c4)
    p4 = MaxPooling2D(pool_size=(2, 2))(c4)

    # encoder block 5
    c5 = Conv2D(self.feat_lev_5, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p4)
    c5 = BatchNormalization()(c5)
    c5 = Conv2D(self.feat_lev_5, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c5)
    c5 = BatchNormalization()(c5)
    #c5 = Dropout(0.3)(c5)  # 0.5

    # average over the spatial axes -> one feature vector per image
    result = GlobalAveragePooling2D()(c5)
    return result
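The encoder above halves the resolution four times (512 -> 32), and GlobalAveragePooling2D then averages each channel over the spatial axes, giving one vector per image. A minimal NumPy sketch of that final pooling step (the 32x32x1024 shape is my assumption about what reaches the last block):

```python
import numpy as np

# assume the last conv block outputs 1024 channels at 32x32, for 2 images
feature_maps = np.random.rand(2, 32, 32, 1024)

# GlobalAveragePooling2D == mean over the spatial axes (height, width)
feature_vectors = feature_maps.mean(axis=(1, 2))
print(feature_vectors.shape)  # (2, 1024)
```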
I have already extracted the mean vector using Keras's predict function, but predict runs outside of training, and the network always processes the input images individually.
# extract features for the prototypical image
prototype = model.predict(xv)
learner_train = DataGenerator_Learner(X, Y, prototype)

# create the mean prototype: predict each image separately and average
prototypes = []
for i in range(len(X)):
    element = learner_train.get_data(i)
    prototypes.append(model.predict(element))
mean_prototype = tf.add_n(prototypes) / len(X)

result = model.fit_generator(generator=learner_train, epochs=num_epochs,
                             callbacks=[checkpointer, earlystopper], steps_per_epoch=1)
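Since only 4 images fit on the GPU at once, the mean can only be built up incrementally. This NumPy sketch shows the incremental mean I would like to reproduce during training (20 images and the [2, 1024] feature shape are assumptions):

```python
import numpy as np

features = np.random.rand(20, 2, 1024)  # per-image [2, 1024] features
batch_size = 4                          # GPU limit

# accumulate a running sum over mini-batches, divide once at the end
running_sum = np.zeros((2, 1024))
for start in range(0, len(features), batch_size):
    batch = features[start:start + batch_size]
    running_sum += batch.sum(axis=0)
mean_prototype = running_sum / len(features)

# identical to the mean taken over all images at once
assert np.allclose(mean_prototype, features.mean(axis=0))
```

The open part of my question is how to make this accumulated mean visible to the loss while gradients are being computed, rather than only via predict.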