Perceptual loss function gives no gradients in TensorFlow

Time: 2019-11-22 12:47:46

Tags: tensorflow keras

I am trying to implement a perceptual loss function in TensorFlow. Here it is:

loss_model = tf.keras.models.Sequential()
# base_model is a pretrained network; its first 12 layers are frozen
# and reused as a fixed feature extractor.
for eachLayer in base_model.layers[:12]:
    eachLayer.trainable = False
    loss_model.add(eachLayer)

def meanSquaredLoss(y_true, y_pred):
    return tf.reduce_mean(tf.keras.losses.MSE(y_true, y_pred))

def featureLoss(image):
    # Run the image through the generator, then compare the feature
    # activations of the output and the original under loss_model.
    predicted_image = model(image, training=False)
    activatedModelVal = loss_model(predicted_image, training=False)
    actualModelVal = loss_model(image, training=False)
    return meanSquaredLoss(actualModelVal, activatedModelVal)

And here is the style loss function:

def gram_matrix(input_tensor):
    # Contract over the spatial dimensions to get a (batch, c, c) Gram
    # matrix, then normalize by the number of spatial locations.
    result = tf.linalg.einsum('bijc,bijd->bcd', input_tensor, input_tensor)
    input_shape = tf.shape(input_tensor)
    num_locations = tf.cast(input_shape[1] * input_shape[2], tf.float32)
    return result / num_locations

def styleLoss(image):
    # Compare the Gram matrices of the two sets of activations.
    predicted_image = model(image, training=False)
    activatedModelVal = loss_model(predicted_image, training=False)
    actualModelVal = loss_model(image, training=False)
    return meanSquaredLoss(gram_matrix(actualModelVal), gram_matrix(activatedModelVal))
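As a quick sanity check (not part of the original post), the same `bijc,bijd->bcd` contraction can be reproduced in NumPy. For a batch of all-ones feature maps of shape (1, 2, 2, 3), every raw Gram entry is the number of spatial locations (4), so every normalized entry is 1:

```python
import numpy as np

def gram_matrix_np(x):
    # x has shape (batch, height, width, channels); contract over the
    # spatial dimensions, then normalize by h * w.
    result = np.einsum('bijc,bijd->bcd', x, x)
    h, w = x.shape[1], x.shape[2]
    return result / float(h * w)

features = np.ones((1, 2, 2, 3), dtype=np.float32)
gram = gram_matrix_np(features)
print(gram.shape)     # (1, 3, 3)
print(gram[0, 0, 0])  # 1.0
```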

So now I have both losses. Here is what I do for the optimization step:

opt = tf.keras.optimizers.Adam(0.02)

def each_train_step(image,showImage=False):
    predicted_image = model(image,training=False)

    # featureLoss and styleLoss each take just the input image
    loss = featureLoss(image) + styleLoss(image)
    with tf.GradientTape() as tape:
        grad = tape.gradient(loss, model.trainable_variables)
        print(grad)
#         opt.apply_gradients(zip(grad, model.trainable_variables))
    if showImage:
        plt.imshow(predicted_image)

The problem is that `grad` comes back as a list of `None` values, but I can't figure out why. Why does the gradient return a list of `None`? Is there a way to get the actual gradients?
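For reference, `tf.GradientTape` only records operations executed inside its `with` block; a value computed before the tape is opened is a constant as far as the tape is concerned, so its gradient comes back as `None`. A minimal standalone sketch (not using the model above) showing both cases:

```python
import tensorflow as tf

x = tf.Variable(3.0)

# Computed BEFORE the tape is opened: the tape never sees this op.
y_outside = x * x
with tf.GradientTape() as tape:
    pass
g_outside = tape.gradient(y_outside, x)
print(g_outside)  # None

# Computed INSIDE the tape: the op is recorded and differentiable.
with tf.GradientTape() as tape:
    y_inside = x * x
g_inside = tape.gradient(y_inside, x)
print(g_inside.numpy())  # 6.0
```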

0 Answers:

No answers yet.