How to set layer weights during training in TensorFlow

Time: 2020-10-07 21:34:18

Tags: python tensorflow machine-learning tensorflow1.15

On every forward pass of the model, I want to apply L2 normalization over the columns of the softmax layer's weight matrix and then set the weights, following the imprinted weights paper and this PyTorch implementation. I am using layer.set_weights() inside the model's call() function to set the normalized weights, but this only works under eager execution, because something goes wrong with layer.set_weights() when the graph is being built.

Here is the implementation of the model in TF 1.15:

import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import Model
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Dense

class Extractor(Model):
    def __init__(self, input_shape):
        super(Extractor, self).__init__()
        self.basenet = ResNet50(include_top=False, weights="imagenet", 
                                 pooling="avg", input_shape=input_shape)

    def call(self, x):
        x = self.basenet(x)
        return x


class Embedding(Model):
    def __init__(self, num_nodes, norm=True):
        super(Embedding, self).__init__()
        self.fc = Dense(num_nodes, activation="relu")
        self.norm = norm
    
    def call(self, x):
        x = self.fc(x)
        if self.norm:
            x = tf.nn.l2_normalize(x)
        return x

class Classifier(Model):
    def __init__(self, n_classes, norm=True, bias=False):
        super(Classifier, self).__init__()
        self.n_classes = n_classes
        self.norm = norm
        self.bias = bias

    def build(self, inputs_shape):
        self.prediction = Dense(self.n_classes,
                                activation="softmax", use_bias=False)

    def call(self, x):
        if self.norm:
            w = self.prediction.trainable_weights
            if w:
                w = tf.nn.l2_normalize(w, axis=2)
                self.prediction.set_weights(w)

        x = self.prediction(x)
        return x

class Net(Model):
    def __init__(self, input_shape, n_classes, num_nodes, norm=True, 
                 bias=False):
        super(Net, self).__init__()
        self.n_classes = n_classes
        self.num_nodes = num_nodes
        self.norm = norm
        self.bias = bias
        self.extractor = Extractor(input_shape)
        self.embedding = Embedding(self.num_nodes, norm=self.norm)
        self.classifier = Classifier(self.n_classes, norm=self.norm, 
                                     bias=self.bias)
    
    
    def call(self, x):
        x = self.extractor(x)
        x = self.embedding(x)
        x = self.classifier(x)
        return x

The weight normalization happens in the call() step of the Classifier class, where I call .set_weights() after normalizing.
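In isolation, that normalization step is meant to do roughly the following on a standalone Dense layer (a minimal eager-mode sketch; the layer, its shape, and the variable names are illustrative and not part of the model above):

import numpy as np
import tensorflow as tf

tf.enable_eager_execution()  # TF 1.x: run eagerly so set_weights() works on each call

# Illustrative standalone layer: 8 input features, 3 classes
layer = tf.keras.layers.Dense(3, activation="softmax", use_bias=False)
layer.build((None, 8))

kernel = layer.get_weights()[0]                   # shape (8, 3), one column per class
kernel = kernel / np.linalg.norm(kernel, axis=0)  # column-wise L2 normalization
layer.set_weights([kernel])                       # fine eagerly, fails while a graph is being traced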

Creating the model with model = Net(input_shape, n_classes, num_nodes) and calling model(x) works, but model.predict() and model.fit() give me errors about .get_weights(). I can train the model in eager mode with a gradient tape, but it is very slow.
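The eager training loop I mean looks roughly like this (a sketch only; the dataset iterable, the loss, and the hyperparameters are placeholders for the real training setup):

import tensorflow as tf

tf.enable_eager_execution()

model = Net(input_shape=(224, 224, 3), n_classes=10, num_nodes=256)
optimizer = tf.keras.optimizers.Adam()

for images, labels in dataset:  # dataset: any iterable of (x, y) batches
    with tf.GradientTape() as tape:
        preds = model(images)
        loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(labels, preds))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))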

Is there another way to set the Dense layer's weights during training on every forward call that still lets me use the model outside of eager mode? By eager mode I mean calling tf.enable_eager_execution() at the start of the program.

Here is the error I get when I use model.predict(x) instead:

TypeError: len is not well defined for symbolic Tensors. (imprint_net_1/classifier/l2_normalize:0) Please call `x.shape` rather than `len(x)` for shape information.

0 Answers:

No answers yet.