How to fix the "AttributeError: 'Tensor' object has no attribute 'set_weights'" error in Keras

Time: 2019-08-09 22:14:06

Tags: python tensorflow keras checkpoint

I am currently trying to load the weights of a network trained in TensorFlow into the equivalent network built in Keras. The problem is that once the weights have been read in, the error above is raised as soon as I call ".set_weights" on each layer. I am not sure why my layers are of type "Tensor" when I only used Keras layers.

Below you can see the code. I have loaded the meta file from TensorFlow and restored the weights from the checkpoint file. After building the network in Keras, I try to load the weights, and it tells me that my layers are of class "Tensor".

This is the code used to load the weights into the "model_vars" dictionary:

with tf.Session() as sess:

    # import graph
    saver = tf.train.import_meta_graph(PATH_REL_META)

    # load weights for graph
    saver.restore(sess, PATH_REL_META[:-5])

    # get all global variables (including model variables)
    vars_global = tf.global_variables()

    # get their name and value and put them into a dictionary
    sess.as_default()
    model_vars = {}
    for var in vars_global:
        try:
            model_vars[var.name] = var.eval()
        except:
            print("For var={}, an exception occurred".format(var.name))

The generic U-Net structure used in this code:

inputs = Input((None,None,10))

conv_A1c = Conv2D(filters=64, kernel_size=[3, 3], padding="same", activation='relu')(inputs)
bn_conv_A1c = BatchNormalization(axis=3)(conv_A1c)
dropout_A1c = Dropout(0.5)(bn_conv_A1c)

conv_A2c = Conv2D(filters=64, kernel_size=[3, 3], padding="same", activation='relu')(dropout_A1c)
bn_conv_A2c = BatchNormalization(axis=3)(conv_A2c)
dropout_A2c = Dropout(0.5)(bn_conv_A2c)

pool_A1c = MaxPooling2D(pool_size=[2, 2], strides=2)(dropout_A2c)

conv_B1c = Conv2D(filters=128, kernel_size=[3, 3], padding="same", activation='relu')(pool_A1c)
bn_conv_B1c = BatchNormalization(axis=3)(conv_B1c)
dropout_B1c = Dropout(0.5)(bn_conv_B1c)

conv_B2c = Conv2D(filters=128, kernel_size=[3, 3], padding="same", activation='relu')(dropout_B1c)
bn_conv_B2c = BatchNormalization(axis=3)(conv_B2c)
dropout_B2c = Dropout(0.5)(bn_conv_B2c)

pool_B1c = MaxPooling2D(pool_size=[2, 2], strides=2)(dropout_B2c)

conv_C1c = Conv2D(filters=256, kernel_size=[3, 3], padding="same", activation='relu')(pool_B1c)
bn_conv_C1c = BatchNormalization(axis=3)(conv_C1c)
dropout_C1c = Dropout(0.5)(bn_conv_C1c)

conv_C2c = Conv2D(filters=256, kernel_size=[3, 3], padding="same", activation='relu')(dropout_C1c)
bn_conv_C2c = BatchNormalization(axis=3)(conv_C2c)
dropout_C2c = Dropout(0.5)(bn_conv_C2c)

pool_C1c = MaxPooling2D(pool_size=[2, 2], strides=2)(dropout_C2c)

conv_D1c = Conv2D(filters=512, kernel_size=[3, 3], padding="same", activation='relu')(pool_C1c)
bn_conv_D1c = BatchNormalization(axis=3)(conv_D1c)
dropout_D1c = Dropout(0.5)(bn_conv_D1c)

conv_D2c = Conv2D(filters=512, kernel_size=[3, 3], padding="same", activation='relu')(dropout_D1c)
bn_conv_D2c = BatchNormalization(axis=3)(conv_D2c)
dropout_D2c = Dropout(0.5)(bn_conv_D2c)

pool_D1c = MaxPooling2D(pool_size=[2, 2], strides=2)(dropout_D2c)

conv_E1 = Conv2D(filters=1024, kernel_size=[3, 3], padding="same", activation='relu')(pool_D1c)
bn_conv_E1 = BatchNormalization(axis=3)(conv_E1)
dropout_E1 = Dropout(0.5)(bn_conv_E1)

conv_E2 = Conv2D(filters=1024, kernel_size=[3, 3], padding="same", activation='relu')(dropout_E1)
bn_conv_E2 = BatchNormalization(axis=3)(conv_E2)
dropout_E2 = Dropout(0.5)(bn_conv_E2)

upconv_E1 = Conv2DTranspose(filters=512, kernel_size=[2, 2], strides=(2, 2), padding='valid')(dropout_E2)

conv_D1e_ip = Concatenate(axis=3)([dropout_D2c, upconv_E1])
conv_D1e = Conv2D(filters=512, kernel_size=[3, 3], padding="same", activation='relu')(conv_D1e_ip)
bn_conv_D1e = BatchNormalization(axis=3)(conv_D1e)
dropout_D1e = Dropout(0.5)(bn_conv_D1e)

conv_D2e = Conv2D(filters=512, kernel_size=[3, 3], padding="same", activation='relu')(dropout_D1e)
bn_conv_D2e = BatchNormalization(axis=3)(conv_D2e)
dropout_D2e = Dropout(0.5)(bn_conv_D2e)

upconv_D2e = Conv2DTranspose(filters=256, kernel_size=[2, 2], strides=(2, 2), padding='valid')(dropout_D2e)

conv_C1e_ip = Concatenate(axis=3)([dropout_C2c, upconv_D2e])
conv_C1e = Conv2D(filters=256, kernel_size=[3, 3], padding="same", activation='relu')(conv_C1e_ip)
bn_conv_C1e = BatchNormalization(axis=3)(conv_C1e)
dropout_C1e = Dropout(0.5)(bn_conv_C1e)

conv_C2e = Conv2D(filters=256, kernel_size=[3, 3], padding="same", activation='relu')(dropout_C1e)
bn_conv_C2e = BatchNormalization(axis=3)(conv_C2e)
dropout_C2e = Dropout(0.5)(bn_conv_C2e)

upconv_C2e = Conv2DTranspose(filters=128, kernel_size=[2, 2], strides=(2, 2), padding='valid')(dropout_C2e)

conv_B1e_ip = Concatenate(axis=3)([dropout_B2c, upconv_C2e])
conv_B1e = Conv2D(filters=128, kernel_size=[3, 3], padding="same", activation='relu')(conv_B1e_ip)
bn_conv_B1e = BatchNormalization(axis=3)(conv_B1e)
dropout_B1e = Dropout(0.5)(bn_conv_B1e)

conv_B2e = Conv2D(filters=128, kernel_size=[3, 3], padding="same", activation='relu')(dropout_B1e)
bn_conv_B2e = BatchNormalization(axis=3)(conv_B2e)
dropout_B2e = Dropout(0.5)(bn_conv_B2e)

upconv_B2e = Conv2DTranspose(filters=64, kernel_size=[2, 2], strides=(2, 2), padding='valid')(dropout_B2e)

conv_A1e_ip = Concatenate(axis=3)([dropout_A2c, upconv_B2e])
conv_A1e = Conv2D(filters=64, kernel_size=[3, 3], padding="same", activation='relu')(conv_A1e_ip)
bn_conv_A1e = BatchNormalization(axis=3)(conv_A1e)
dropout_A1e = Dropout(0.5)(bn_conv_A1e)

conv_A2e = Conv2D(filters=64, kernel_size=[3, 3], padding="same", activation='relu')(dropout_A1e)
bn_conv_A2e = BatchNormalization(axis=3)(conv_A2e)
dropout_A2e = Dropout(0.5)(bn_conv_A2e)

conv_score = Conv2D(2, kernel_size=[1, 1], padding="same", activation='relu')(dropout_A2e)

model = Model(inputs=[inputs], outputs=[conv_score])
model.compile(optimizer='adam', loss='binary_crossentropy')
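As a sanity check, the Keras layer names and weight shapes can be printed and compared against the checkpoint variables (a minimal sketch, assuming the model above has been built):

# Print each Keras layer name together with the shapes of its weights so
# they can be lined up against the checkpoint entries in model_vars.
model.summary()
for layer in model.layers:
    shapes = [w.shape for w in layer.get_weights()]
    if shapes:
        print(layer.name, shapes)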

An example of how I tried to load the weights into a layer:

conv_A1c.set_weights([model_vars["conv2d/kernel:0"], model_vars["conv2d/bias:0"]])

bn_conv_A1c.set_weights([model_vars["batch_normalization/gamma:0"],
                         model_vars["batch_normalization/beta:0"],
                         model_vars["batch_normalization/moving_mean:0"],
                         model_vars["batch_normalization/moving_variance:0"]])

The items in "model_vars" all come back as NumPy arrays, while the type of each layer comes back as a Tensor.

It would be very helpful if you could help me figure out why my layers come back as tensors rather than as Keras layer objects.

1 Answer:

Answer 0: (score: 0)

Try this:

model.layers[0].set_weights([model_vars["conv2d/kernel:0"], model_vars["conv2d/bias:0"]])
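This works because names such as conv_A1c in the question hold the output tensors returned by calling each layer, not the Layer objects themselves; the Layer objects, which do have set_weights, live in model.layers and can also be looked up by name. A minimal sketch along those lines (the layer name 'conv2d' is an assumption; check the actual auto-generated names with model.summary()):

# Look up the Layer object by its Keras name and assign the checkpoint
# weights to it. The name 'conv2d' is illustrative; Keras auto-generates
# layer names, so verify them with model.summary() first.
conv_layer = model.get_layer('conv2d')
conv_layer.set_weights([model_vars["conv2d/kernel:0"],
                        model_vars["conv2d/bias:0"]])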