Model with multiple outputs and a custom loss function

Date: 2020-04-17 20:31:59

Tags: python tensorflow keras

I am trying to train a model with multiple outputs and a custom loss function using Keras, but I am getting the error tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.

Debugging this is difficult because I am just calling model.compile and model.fit. I think it has to do with how the model should be defined when it has multiple outputs, but I cannot find good documentation on this. The guide explains how to create a model with multiple outputs using the functional API and gives an example, but it does not clarify how a custom loss function should work with the subclassed Model API. My code is below:

class DeepEnsembles(Model):

    def __init__(self, **kwargs):
        super(DeepEnsembles, self).__init__()

        self.num_models = kwargs.get('num_models')
        model = kwargs.get('model')

        # Instantiate one sub-network per ensemble member, for the mean and for the variance
        self.mean = [model(**kwargs) for _ in range(self.num_models)]

        self.variance = [model(**kwargs) for _ in range(self.num_models)]

    def call(self, inputs, training=None, mask=None):
        mean_predictions = []
        variance_predictions = []
        for idx in range(self.num_models):
            mean_predictions.append(self.mean[idx](inputs, training=training))
            variance_predictions.append(self.variance[idx](inputs, training=training))
        mean_stack = tf.stack(mean_predictions)
        variance_stack = tf.stack(variance_predictions)

        return mean_stack, variance_stack
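
For intuition about what call returns, the shapes can be sketched with NumPy (np.stack behaves like tf.stack here; num_models=5, a batch of 8, and one output unit are assumed values for illustration):

```python
import numpy as np

num_models, batch, out = 5, 8, 1

# Each sub-model produces a (batch, out) prediction
mean_predictions = [np.random.random((batch, out)) for _ in range(num_models)]

# Stacking prepends the model axis: (num_models, batch, out)
mean_stack = np.stack(mean_predictions)
print(mean_stack.shape)  # (5, 8, 1)
```

So each element of the returned tuple is a single tensor whose leading axis indexes the ensemble members.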

Here is the MLP:

class MLP(Model):
    def __init__(self, **kwargs):
        super(MLP, self).__init__()

        # Initialization parameters
        self.num_inputs = kwargs.get('num_inputs', 779)
        self.num_outputs = kwargs.get('num_outputs', 1)
        self.hidden_size = kwargs.get('hidden_size', 256)
        self.activation = kwargs.get('activation', 'relu')

        # Optional parameters
        self.p = kwargs.get('p', 0.05)

        self.model = tf.keras.Sequential([
            layers.Dense(self.hidden_size, activation=self.activation, input_shape=(self.num_inputs,)),
            layers.Dropout(self.p),
            layers.Dense(self.hidden_size, activation=self.activation),
            layers.Dropout(self.p),
            layers.Dense(self.num_outputs)
         ])

    def call(self, inputs, training=None, mask=None):
        output = self.model(inputs, training=training)
        return output

I am trying to minimize this custom loss function:

class GaussianNLL(Loss):

    def __init__(self):
        super(GaussianNLL, self).__init__()

    def call(self, y_true, y_pred):

        mean, variance = y_pred
        variance = variance + 0.0001
        nll = (tf.math.log(variance) / 2 + ((y_true - mean) ** 2) / (2 * variance))
        nll = tf.math.reduce_mean(nll)
        return nll
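
As a sanity check on the formula (a plain NumPy sketch of the same expression, not the Keras loss itself): this is the Gaussian negative log-likelihood with the constant log(2*pi)/2 term dropped, so for mean 0, variance 1, and y_true = 0 it should be approximately 0 (shifted slightly by the 0.0001 stabilizer):

```python
import numpy as np

def gaussian_nll(y_true, mean, variance, eps=1e-4):
    """NLL of y_true under N(mean, variance), dropping the constant
    log(2*pi)/2 term, mirroring the Keras loss above."""
    variance = variance + eps  # numerical stabilizer, as in the loss
    nll = np.log(variance) / 2 + (y_true - mean) ** 2 / (2 * variance)
    return nll.mean()

y_true = np.zeros((4, 1))
mean = np.zeros((4, 1))
variance = np.ones((4, 1))
print(gaussian_nll(y_true, mean, variance))  # close to 0 (eps shifts it slightly)
```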

Finally, this is how I try to train it:

ensembles_params = {'num_models': 5, 'model': MLP, 'p': 0}
model = DeepEnsembles(**ensembles_params)
loss_fn = GaussianNLL()
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
epochs = 10000

model.compile(optimizer=optimizer,
              loss=loss_fn,
              metrics=['mse', 'mae'])
history = model.fit(x_train, y_train,
                    batch_size=2048,
                    epochs=epochs,
                    verbose=0,
                    validation_data=(x_val, y_val))

This results in the error above. Any pointers? For reference, the full stack trace is:

Traceback (most recent call last):
  File "/home/emilio/anaconda3/lib/python3.7/contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/ops/variable_scope.py", line 2803, in variable_creator_scope
    yield
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 235, in fit
    use_multiprocessing=use_multiprocessing)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 593, in _process_training_inputs
    use_multiprocessing=use_multiprocessing)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 646, in _process_inputs
    x, y, sample_weight=sample_weights)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 2360, in _standardize_user_data
    self._compile_from_inputs(all_inputs, y_input, x, y)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 2618, in _compile_from_inputs
    experimental_run_tf_function=self._experimental_run_tf_function)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 446, in compile
    self._compile_weights_loss_and_weighted_metrics()
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 1592, in _compile_weights_loss_and_weighted_metrics
    self.total_loss = self._prepare_total_loss(masks)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 1652, in _prepare_total_loss
    per_sample_losses = loss_fn.call(y_true, y_pred)
  File "/home/emilio/fault_detection/tensorflow_code/tf_utils/loss.py", line 13, in call
    mean, variance = y_pred
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 539, in __iter__
    self._disallow_iteration()
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 535, in _disallow_iteration
    self._disallow_in_graph_mode("iterating over `tf.Tensor`")
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 515, in _disallow_in_graph_mode
    " this function with @tf.function.".format(task))
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.


So it clearly has something to do with the loss function. But the forward pass of the model outputs a tuple, which I unpack in the loss function, so I don't see why this is a problem.

1 Answer:

Answer 0 (score: 1)

With a quick test, I think I solved the problem by replacing:

        mean, variance = y_pred
        variance = variance + 0.0001

with

        mean = y_pred[0]
        variance = y_pred[1] + 0.0001

Unpacking y_pred (which is a tensor) calls the method Tensor.__iter__, which apparently raises the error, while I suppose the method Tensor.__getitem__ does not...
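
The difference can be mimicked in plain Python (a toy stand-in for the graph-mode Tensor, not TensorFlow's actual implementation): tuple unpacking goes through __iter__, while t[0] goes through __getitem__, so an object that forbids only iteration still supports indexing.

```python
class GraphTensor:
    """Toy stand-in: indexing works, iteration raises (like graph-mode tf.Tensor)."""
    def __init__(self, parts):
        self._parts = parts

    def __getitem__(self, idx):
        return self._parts[idx]

    def __iter__(self):
        raise TypeError("iterating over `tf.Tensor` is not allowed in Graph execution")

t = GraphTensor(["mean", "variance"])
print(t[0], t[1])           # indexing is fine
try:
    mean, variance = t      # unpacking calls __iter__ and fails
except TypeError as e:
    print(e)
```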

I did not get to the point where it actually starts learning; I don't think my current dummy x_train and y_train have the proper shapes. I will try to investigate further if you find that the problem happens again later.

Edit:

I managed to get your code running using

x_train = np.random.random((10000, 779))
y_train = np.random.random((10000, 1))

and by changing the last line of the method DeepEnsembles.call to

        return tf.stack([mean_stack, variance_stack])

and by commenting out the metrics (this is necessary because y_true and y_pred are expected to have different shapes, so you may need to define your own versions of mse and mae to use as metrics):

model.compile(optimizer='adam',
              loss=loss_fn,
              # metrics=['mse', 'mae']
)
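
One way such a replacement metric could look, sketched here in NumPy for clarity (inside Keras you would write the same thing with tf.* ops): slice the mean half out of the stacked prediction, average over the ensemble axis, then compare against y_true.

```python
import numpy as np

def ensemble_mse(y_true, y_pred_stacked):
    """y_pred_stacked has shape (2, num_models, batch, 1):
    index 0 holds the mean stack, index 1 the variance stack."""
    mean_stack = y_pred_stacked[0]           # (num_models, batch, 1)
    ensemble_mean = mean_stack.mean(axis=0)  # average over ensemble members
    return ((y_true - ensemble_mean) ** 2).mean()

y_true = np.zeros((16, 1))
y_pred = np.zeros((2, 5, 16, 1))  # 5 ensemble members, batch of 16
print(ensemble_mse(y_true, y_pred))  # 0.0
```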

I believe this is quite close to what you expect.

The reason for not returning a tuple is that TensorFlow interprets each element of the tuple as a separate output of the network and applies the loss to each one independently.

You can test this by keeping the old version of DeepEnsembles.call and using

y_train_1 = np.random.random((10000, 1))
y_train_2 = np.random.random((10000, 1))
y_train = [y_train_1, y_train_2]

It will execute, and there will be 10 MLPs, but MLP_1/2 will learn the mean and variance of y_train_1, MLP_6/7 will learn the mean and variance of y_train_2, and all the other MLPs will learn nothing.