Custom loss function in TensorFlow 2.0 Alpha, converted from TF 1.13
I am trying to use the roc_auc loss function from this library in model.compile() in TF 2.0. Although my implementation runs without errors, the loss and accuracy never change.
I first converted the TF 1.0 code to 2.0 using the upgrade script recommended by Google.
Then I imported the function from the library and used it as follows:
model.compile(optimizer='adam',
              loss=roc_auc_loss,
              metrics=['accuracy', acc0, acc1, acc2, acc3, acc4])
Epoch 17/100
100/100 [==============================] - 20s 197ms/step - loss: 469.7043 - accuracy: 0.0000e+00 - acc0: 0.0000e+00 - acc1: 0.0000e+00 - acc2: 0.0000e+00 - acc3: 0.0000e+00 - acc4: 0.0000e+00 - val_loss: 152.2152 - val_accuracy: 0.0000e+00 - val_acc0: 0.0000e+00 - val_acc1: 0.0000e+00 - val_acc2: 0.0000e+00 - val_acc3: 0.0000e+00 - val_acc4: 0.0000e+00
Epoch 18/100
100/100 [==============================] - 20s 198ms/step - loss: 472.0472 - accuracy: 0.0000e+00 - acc0: 0.0000e+00 - acc1: 0.0000e+00 - acc2: 0.0000e+00 - acc3: 0.0000e+00 - acc4: 0.0000e+00 - val_loss: 152.2152 - val_accuracy: 0.0000e+00 - val_acc0: 0.0000e+00 - val_acc1: 0.0000e+00 - val_acc2: 0.0000e+00 - val_acc3: 0.0000e+00 - val_acc4: 0.0000e+00
Epoch 19/100
78/100 [======================>.......] - ETA: 4s - loss: 467.4657 - accuracy: 0.0000e+00 - acc0: 0.0000e+00 - acc1: 0.0000e+00 - acc2: 0.0000e+00 - acc3: 0.0000e+00 - acc4: 0.0000e+00
I would like to understand what is going wrong with Keras in TF 2.0, since the loss is apparently not being backpropagated. Thanks.
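One likely cause (my assumption, not confirmed by the question) is that a hard ROC-AUC is a step function of the predictions, so its gradient is zero almost everywhere and the optimizer has nothing to follow. A common workaround is a smooth pairwise surrogate that compares every positive score against every negative score. The sketch below illustrates the idea in plain NumPy; the function name and `gamma` parameter are mine, not from the library in the question:

```python
import numpy as np

def pairwise_auc_surrogate(y_true, y_score, gamma=1.0):
    """Smooth surrogate for 1 - AUC: penalize positive/negative
    score pairs where the positive does not clearly exceed the negative."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # All positive-minus-negative score gaps, shape (n_pos, n_neg)
    diff = pos[:, None] - neg[None, :]
    # Sigmoid of the negated gap: differentiable everywhere,
    # near 0 when positives rank above negatives, near 1 otherwise
    return float(np.mean(1.0 / (1.0 + np.exp(gamma * diff))))

y = np.array([1, 1, 0, 0])
good = np.array([0.9, 0.8, 0.2, 0.1])  # positives ranked above negatives
bad = np.array([0.1, 0.2, 0.8, 0.9])   # ranking inverted
print(pairwise_auc_surrogate(y, good))  # small
print(pairwise_auc_surrogate(y, bad))   # large
```

A TF port of this (with `tf.sigmoid` and `tf.reduce_mean`) would give non-zero gradients, unlike a thresholded AUC.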
Answer 0 (score: -1)
@ruben, could you share a standalone code sample that reproduces the issue? I think we need to look at the function definition. Did you add @tf.function() above the function definition? Thanks!
Please check the following example (a simple one from the TF website):
!pip install tensorflow==2.0.0-beta1

import tensorflow as tf
from tensorflow import keras
import tensorflow.keras.backend as K

# Load MNIST data and scale pixel values to [0, 1]
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Custom loss 1 (for example): mean of the raw differences
@tf.function()
def customLoss1(yTrue, yPred):
    return tf.reduce_mean(yTrue - yPred)

# Custom loss 2 (for example): mean squared error
@tf.function()
def customLoss2(yTrue, yPred):
    return tf.reduce_mean(tf.square(tf.subtract(yTrue, yPred)))

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Compile the model with the custom functions as extra metrics
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy', customLoss1, customLoss2])

# Fit and evaluate model
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
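As a quick sanity check of the two custom functions above, here are plain-NumPy equivalents (the helper names are mine; this is just the same math outside TensorFlow):

```python
import numpy as np

def custom_loss1(y_true, y_pred):
    # Same math as customLoss1: mean of the raw differences
    return np.mean(y_true - y_pred)

def custom_loss2(y_true, y_pred):
    # Same math as customLoss2: mean squared error
    return np.mean(np.square(y_true - y_pred))

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.8, 0.1, 0.6])
print(custom_loss1(y_true, y_pred))  # ≈ 0.1667
print(custom_loss2(y_true, y_pred))  # ≈ 0.07
```

Note that when passed via `metrics=[...]`, these are evaluated and reported per batch but do not affect training; only the function passed as `loss` is differentiated.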
WARNING: Logging before flag parsing goes to stderr.
W0711 23:57:16.453042 139687207184256 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_grad.py:1250: add_dispatch_support..wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 5s 87us/sample - loss: 0.2983 - accuracy: 0.9133 - customLoss1: 4.3539 - customLoss2: 27.3769
Epoch 2/5
60000/60000 [==============================] - 5s 83us/sample - loss: 0.1456 - accuracy: 0.9555 - customLoss1: 4.3539 - customLoss2: 27.3860
Epoch 3/5
60000/60000 [==============================] - 5s 82us/sample - loss: 0.1095 - accuracy: 0.9663 - customLoss1: 4.3539 - customLoss2: 27.3881
Epoch 4/5
60000/60000 [==============================] - 5s 83us/sample - loss: 0.0891 - accuracy: 0.9717 - customLoss1: 4.3539 - customLoss2: 27.3893
Epoch 5/5
60000/60000 [==============================] - 5s 87us/sample - loss: 0.0745 - accuracy: 0.9765 - customLoss1: 4.3539 - customLoss2: 27.3901
10000/10000 [==============================] - 0s 46us/sample - loss: 0.0764 - accuracy: 0.9775 - customLoss1: 4.3429 - customLoss2: 27.3301
[0.07644735965565778, 0.9775, 4.342905, 27.330126]