Output tensors of a Model must come from the Keras Model API (TensorFlow)

Date: 2019-06-08 13:01:16

Tags: python tensorflow

# Imports assumed for this snippet (tf.keras layers and Model)
import tensorflow as tf
from tensorflow.keras.layers import (Input, Conv2D, Conv2DTranspose,
                                     BatchNormalization, LeakyReLU)
from tensorflow.keras.models import Model


def generator_model(self):
    # Encoder: five strided convolutions, 64x64x1 down to 2x2x(filter_size*16)
    input_images = Input(shape=[64, 64, 1])
    layer1 = Conv2D(self.filter_size, self.kernel_size, (2, 2), padding='same', use_bias=False, kernel_initializer='random_uniform')(input_images)
    layer1 = LeakyReLU(0.2)(layer1)

    layer2 = Conv2D(self.filter_size*2, self.kernel_size, (2, 2), padding='same', use_bias=False, kernel_initializer='random_uniform')(layer1)
    layer2 = BatchNormalization()(layer2)
    layer2 = LeakyReLU(0.2)(layer2)

    layer3 = Conv2D(self.filter_size*4, self.kernel_size, (2, 2), padding='same', use_bias=False, kernel_initializer='random_uniform')(layer2)
    layer3 = BatchNormalization()(layer3)
    layer3 = LeakyReLU(0.2)(layer3)

    layer4 = Conv2D(self.filter_size*8, self.kernel_size, (2, 2), padding='same', use_bias=False, kernel_initializer='random_uniform')(layer3)
    layer4 = BatchNormalization()(layer4)
    layer4 = LeakyReLU(0.2)(layer4)

    layer5 = Conv2D(self.filter_size*16, self.kernel_size, (2, 2), padding='same', use_bias=False, kernel_initializer='random_uniform')(layer4)
    layer5 = BatchNormalization()(layer5)
    layer5 = LeakyReLU(0.2)(layer5)

    # Decoder: transposed convolutions with skip connections back to the encoder
    up_layer5 = Conv2DTranspose(self.filter_size*8, self.kernel_size, strides=(2, 2), padding='same', use_bias=False)(layer5)
    up_layer5 = BatchNormalization()(up_layer5)
    up_layer5 = LeakyReLU(0.2)(up_layer5)
    # shape = 4*4*512
    up_layer5_concat = tf.concat([up_layer5, layer4], 0)

    up_layer6 = Conv2DTranspose(self.filter_size*4, self.kernel_size, strides=(2, 2), padding='same', use_bias=False)(up_layer5_concat)
    up_layer6 = BatchNormalization()(up_layer6)
    up_layer6 = LeakyReLU(0.2)(up_layer6)
    up_layer_6_concat = tf.concat([up_layer6, layer3], 0)

    up_layer7 = Conv2DTranspose(self.filter_size*2, self.kernel_size, strides=(2, 2), padding='same', use_bias=False)(up_layer_6_concat)
    up_layer7 = BatchNormalization()(up_layer7)
    up_layer7 = LeakyReLU(0.2)(up_layer7)
    up_layer_7_concat = tf.concat([up_layer7, layer2], 0)

    up_layer8 = Conv2DTranspose(self.filter_size, self.kernel_size, strides=(2, 2), padding='same', use_bias=False)(up_layer_7_concat)
    up_layer8 = BatchNormalization()(up_layer8)
    up_layer8 = LeakyReLU(0.2)(up_layer8)
    up_layer_8_concat = tf.concat([up_layer8, layer1], 0)

    output = Conv2D(3, self.kernel_size, strides=(1, 1), padding='same', use_bias=False)(up_layer_8_concat)
    final_output = LeakyReLU(0.2)(output)

    model = Model(input_images, output)
    model.summary()
    return model

This is what my generator_model looks like; I followed a research paper to build the architecture. However, I am running into an error. I have checked other solutions to this problem on SO, but they did not work for me, probably because their setups are slightly different. My guess is that the problem is with the tf.concat() function, which should be wrapped as a tensorflow.keras Lambda layer, but I tried that as well and it did not help. Any help with this? It has been bugging me for two days.
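For reference, the Lambda wrapping I mean would look roughly like the minimal sketch below. This is a self-contained toy example, not my actual code; the tensor names and shapes are made up.

    import tensorflow as tf
    from tensorflow.keras.layers import Input, Conv2D, Lambda
    from tensorflow.keras.models import Model

    # Two small feature maps standing in for a decoder output and its encoder skip.
    inp = Input(shape=(4, 4, 8))
    a = Conv2D(8, 3, padding='same')(inp)
    b = Conv2D(8, 3, padding='same')(inp)

    # tf.concat wrapped in a Lambda, so the result is the output of a Keras layer.
    # Note: my generator concatenates on axis 0 (the batch axis); for channel-wise
    # skip connections axis=-1 is the usual choice.
    merged = Lambda(lambda t: tf.concat(t, axis=-1))([a, b])

    model = Model(inp, merged)
    model.summary()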

1 Answer:

Answer 0 (score: 1)

When you define a model with the Keras functional API, you have to use Keras layers to build it.

So you are right: the problem lies in your tf.concat calls.

However, the tf.keras.layers package provides a Concatenate layer, which also works with the functional API.

So you can replace each concat call, changing

up_layer5_concat = tf.concat([up_layer5, layer4], 0)

to

up_layer5_concat = tf.keras.layers.Concatenate()([up_layer5, layer4])

and so on for every other tf.concat call in your network.
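To illustrate, here is a minimal self-contained sketch of the same skip-connection pattern built with Concatenate. The filter counts and names are made up rather than taken from your generator; Concatenate joins along the channel axis (axis=-1) by default.

    import tensorflow as tf
    from tensorflow.keras.layers import (Input, Conv2D, Conv2DTranspose,
                                         LeakyReLU, Concatenate)
    from tensorflow.keras.models import Model

    # Tiny encoder/decoder with one skip connection.
    inp = Input(shape=(64, 64, 1))

    enc1 = Conv2D(16, 3, strides=(2, 2), padding='same', use_bias=False)(inp)   # 32x32x16
    enc1 = LeakyReLU(0.2)(enc1)
    enc2 = Conv2D(32, 3, strides=(2, 2), padding='same', use_bias=False)(enc1)  # 16x16x32
    enc2 = LeakyReLU(0.2)(enc2)

    dec1 = Conv2DTranspose(16, 3, strides=(2, 2), padding='same', use_bias=False)(enc2)  # 32x32x16
    dec1 = LeakyReLU(0.2)(dec1)

    # Keras layer instead of tf.concat: joins dec1 and enc1 along the channel axis.
    skip = Concatenate()([dec1, enc1])  # 32x32x32

    out = Conv2DTranspose(3, 3, strides=(2, 2), padding='same', use_bias=False)(skip)    # 64x64x3

    model = Model(inp, out)
    model.summary()  # builds without the "output tensors must come from a Keras layer" error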