Keras: first and second derivatives with respect to specific input values in a custom loss function

Asked: 2020-10-16 16:34:53

Tags: python tensorflow keras deep-learning

I am trying to approximate the solution of a PDE, and to do that I need to compute first and second derivatives of the network output with respect to specific input values in my training data batch.
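
By these derivatives I mean the quantities that nested tf.GradientTape calls compute; here is a minimal, self-contained sketch of what I am after (the toy model, activation, and input values below are just placeholders, not my actual setup):

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# placeholder model with the same 2-feature input layout
toy_model = keras.Sequential([
    layers.Dense(4, activation="tanh", input_shape=(2,)),
    layers.Dense(1, activation=None),
])

x = tf.constant([[0.5, 1.0]])  # one sample, shape (1, 2)

with tf.GradientTape() as tape2:
    tape2.watch(x)
    with tf.GradientTape() as tape1:
        tape1.watch(x)
        y = toy_model(x)                     # shape (1, 1)
    dy_dx = tape1.gradient(y, x)             # first derivatives, shape (1, 2)
    dy_dx0 = dy_dx[:, 0]                     # dy/dx0
d2y_dx0_2 = tape2.gradient(dy_dx0, x)[:, 0]  # d2y/dx0^2

print(dy_dx, d2y_dx0_2)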

I defined a custom loss function:

from tensorflow.keras import backend as K

def loss_tensor(f_init, y_pred, x_train):
    # residual-type term for sample index 3: 4 * dy/dx1 - d2y/dx0^2
    l1 = K.square(4*K.gradients(y_pred[3], x_train[3,1]) - K.gradients(K.gradients(y_pred[3], x_train[3,0]), x_train[3,0]))
    l2 = K.square(y_pred[0])
    l3 = K.square(y_pred[1])
    l4 = K.square(y_pred[5] - f_init[5])

    return l1 + l2 + l3 + l4

def loss_func(x_train):
    # closure that bakes the training inputs into a Keras-compatible loss(y_true, y_pred)
    def loss(f_init, y_pred):
        return loss_tensor(f_init, y_pred, x_train)
    return loss

I defined the NN as:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
model.add(keras.Input(shape=(2,)))
model.add(layers.Dense(16, activation="relu"))
model.add(layers.Dense(8, activation="relu"))
model.add(layers.Dense(1, activation=None))

model.summary()

When I run

opt = keras.optimizers.SGD(learning_rate=0.1)
model_loss = loss_func(x_train=X_train)
model.compile(optimizer=opt, loss=model_loss)
history = model.fit(x=X_train, y=f_init, shuffle=False, epochs=5)

I get an AttributeError saying

AttributeError: in user code:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function  *
        return step_function(self, iterator)
    :11 loss  *
        return _loss_tensor(f_init, y_pred, x_train)
    :2 _loss_tensor  *
        l1 = K.square(4*K.gradients(y_pred[3], x_train[3,1]) - K.gradients(K.gradients(y_pred[3], x_train[3,0]), x_train[3,0]))
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py:3969 gradients  **
        loss, variables, colocate_gradients_with_ops=True)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gradients_impl.py:172 gradients
        unconnected_gradients)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gradients_util.py:536 _GradientsHelper
        gradient_uid)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gradients_util.py:167 _DefaultGradYs
        with _maybe_colocate_with(y.op, gradient_uid, colocate_gradients_with_ops):

    AttributeError: 'NoneType' object has no attribute 'op'

In case it is needed, here is what X_train looks like:

X_train

and f_init is a (6,1) array with all values equal to 2.414 (I assume its dimensions have to match those of y_pred; otherwise it is just the single value I need in the l4 term of the loss function).
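
My guess (I may be wrong) is that x_train inside the loss is just the numpy batch I passed to loss_func, so x_train[3,1] is a plain number that never enters the computational graph; K.gradients then returns None for the inner derivative, and the outer K.gradients call fails on that None. The following small snippet (all names are made up for illustration) reproduces a None gradient for a value that is disconnected from the computation:

import numpy as np
import tensorflow as tf

x_batch = np.random.rand(6, 2).astype("float32")  # stand-in for X_train
w = tf.Variable([[1.0], [2.0]])

x31 = tf.constant(x_batch[3, 1])            # a copy of one value, never used in the graph
with tf.GradientTape() as tape:
    tape.watch(x31)
    y = tf.matmul(tf.constant(x_batch), w)  # y is built from the full batch constant
    y3 = y[3]

print(tape.gradient(y3, x31))               # None: y3 does not depend on x31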

0 Answers:

There are no answers yet.