tensorflow / keras: "An op outside of the function building code is being passed a 'Graph' tensor"

Asked: 2020-09-05 01:00:19

Tags: python tensorflow keras

I'm new to TensorFlow/Keras and have been working through the book Hands-On Machine Learning with Scikit-Learn and TensorFlow. Chapter 12 covers customizing TensorFlow, and in the accompanying notebook (here) I found the following custom model:

import tensorflow as tf
from tensorflow import keras

class ReconstructingRegressor(keras.models.Model):
    def __init__(self, output_dim, **kwargs):
        super().__init__(**kwargs)
        self.hidden = [keras.layers.Dense(30, activation="selu",
                                          kernel_initializer="lecun_normal")
                       for _ in range(5)]
        self.out = keras.layers.Dense(output_dim)

    def build(self, batch_input_shape):
        n_inputs = batch_input_shape[-1]
        self.reconstruct = keras.layers.Dense(n_inputs)
        super().build(batch_input_shape)
        
    def call(self, inputs, training=None):
        Z = inputs
        for layer in self.hidden:
            Z = layer(Z)
        reconstruction = self.reconstruct(Z)

        recon_loss = tf.reduce_mean(tf.square(reconstruction - inputs))        
        self.add_loss(0.05 * recon_loss)

        return self.out(Z)

When I train with this model, I get the following error:

TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: mul:0

The problem is self.add_loss(0.05 * recon_loss); with that line commented out, everything runs fine. Presumably recon_loss is the "Graph" tensor and self.add_loss() is the "op outside of the function building code", but if that applies to add_loss(), I don't see how else to add a loss from inside call().

Full disclosure: I'm using TensorFlow 2.3 while the book assumes 2.1, so I'm not following the instructions exactly. That said, I'm really curious how to fix this, and at my current level of knowledge I feel completely stuck. It looks like it should work, so how else is one supposed to add a loss? Any help would be appreciated.

Full example:

import tensorflow as tf
from tensorflow import keras

from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(
    housing.data, housing.target.reshape(-1, 1), random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
    X_train_full, y_train_full, random_state=42)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_valid_scaled = scaler.transform(X_valid)
X_test_scaled = scaler.transform(X_test)

class ReconstructingRegressor(keras.models.Model):
    def __init__(self, output_dim, **kwargs):
        super().__init__(**kwargs)
        self.hidden = [keras.layers.Dense(30, activation="selu",
                                          kernel_initializer="lecun_normal")
                       for _ in range(5)]
        self.out = keras.layers.Dense(output_dim)

    def build(self, batch_input_shape):
        n_inputs = batch_input_shape[-1]
        self.reconstruct = keras.layers.Dense(n_inputs)
        super().build(batch_input_shape)
        
    def call(self, inputs, training=None):
        Z = inputs
        for layer in self.hidden:
            Z = layer(Z)
        reconstruction = self.reconstruct(Z)
        recon_loss = tf.reduce_mean(tf.square(reconstruction - inputs))
        
        self.add_loss(0.05 * recon_loss)

        return self.out(Z)

model = ReconstructingRegressor(1, dynamic=True)
model.compile(loss="mse", optimizer="nadam")
history = model.fit(X_train_scaled, y_train, epochs=2)

1 Answer:

Answer 0 (score: 0)

Although it is probably too late to answer this question, let me show you what I tried.

First, I removed the custom model's build() method:

    def build(self, batch_input_shape):
        n_inputs = batch_input_shape[-1]
        self.reconstruct = keras.layers.Dense(n_inputs)
        super().build(batch_input_shape)

Either compiling with run_eagerly=True OR computing the custom loss in a custom layer works. For example, here is the custom layer code:

class ReconLoss(keras.layers.Layer):
  def __init__(self, **kwargs):
    super().__init__(**kwargs)

  def call(self, inputs):
    x, reconstruction = inputs
    recon_loss = tf.reduce_mean(tf.square(reconstruction - x))

    self.add_loss(0.05 * recon_loss)

    return reconstruction  # return a tensor so the layer has an output

Then assign an instance of it in the custom model's __init__ and insert self.ReconLoss([x, reconstruction]) into the custom model's call() method.
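Putting the pieces together, here is a minimal self-contained sketch of that wiring (the names ReconLossLayer, recon_loss_layer, and n_inputs are illustrative assumptions, not from the original post or the linked Colab; since build() was removed, this sketch passes the input width to __init__ instead, and uses random data in place of the housing set):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras


class ReconLossLayer(keras.layers.Layer):
    """Adds 0.05 * MSE(reconstruction, x) as an auxiliary loss."""

    def call(self, inputs):
        x, reconstruction = inputs
        recon_loss = tf.reduce_mean(tf.square(reconstruction - x))
        self.add_loss(0.05 * recon_loss)
        return reconstruction  # pass-through output


class ReconstructingRegressor(keras.models.Model):
    def __init__(self, output_dim, n_inputs, **kwargs):
        super().__init__(**kwargs)
        self.hidden = [keras.layers.Dense(30, activation="selu",
                                          kernel_initializer="lecun_normal")
                       for _ in range(5)]
        # Input width is fixed up front instead of in build().
        self.reconstruct = keras.layers.Dense(n_inputs)
        self.recon_loss_layer = ReconLossLayer()
        self.out = keras.layers.Dense(output_dim)

    def call(self, inputs, training=None):
        Z = inputs
        for layer in self.hidden:
            Z = layer(Z)
        reconstruction = self.reconstruct(Z)
        # Called only for its add_loss() side effect.
        self.recon_loss_layer([inputs, reconstruction])
        return self.out(Z)


X = np.random.rand(64, 8).astype("float32")
y = np.random.rand(64, 1).astype("float32")
model = ReconstructingRegressor(1, n_inputs=8)
model.compile(loss="mse", optimizer="nadam")
history = model.fit(X, y, epochs=1, verbose=0)
```

Because the reconstruction loss now lives inside a layer's call(), Keras tracks it the same way it tracks regularization losses, and no graph tensor leaks out of the function-building context.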

Edited Colab notebook: https://colab.research.google.com/drive/1Hwi6auz2meKvD0ogdDSywb2E4_J1F9S_?usp=sharing

I still don't understand why the error is raised, but this works for me.

