I'm trying to maximize the number of predictions that land close to the true value, even if that produces wild outliers that would otherwise skew the median (for which I already have a working loss) or the mean.
So I'm trying to use this custom loss function:
def lossMetricPercentGreaterThanTenPercentError(y_true, y_pred):
    """
    CURRENTLY DOESN'T WORK AS LOSS: NOT DIFFERENTIABLE
    ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
    See https://keras.io/losses/
    """
    from keras import backend as K
    import tensorflow as tf
    # Relative error, with the denominator clipped away from zero.
    diff = K.abs((y_true - y_pred) / K.clip(K.abs(y_true), K.epsilon(), None))
    # Fraction of samples within 10% relative error (the comparison has no gradient).
    withinTenPct = tf.reduce_sum(tf.cast(K.less_equal(diff, 0.1), tf.int32), axis=-1) / tf.size(diff, out_type=tf.int32)
    return 100 * (1 - tf.cast(withinTenPct, tf.float32))
I know that at least the less_equal function is not differentiable (I'm not sure about tf.size either); is there some tensor operation that approximates "less than or equal to"?

I'm on TensorFlow 1.12.3 and can't upgrade, so even though tf.numpy_function(lambda x: np.sum(x <= 0.1) / len(x), diff, tf.float32) could serve as a wrapper, I can't use tf.numpy_function.
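To see why the hard comparison gives the optimizer nothing to work with, here is a small NumPy sketch (my own illustration, not the Keras code above): the metric is piecewise constant in y_pred, so nudging a prediction slightly leaves it unchanged, and the gradient is zero almost everywhere.

```python
import numpy as np

def hard_metric(y_true, y_pred, threshold=0.1):
    # Same idea as the Keras loss: percent of samples NOT within 10% relative error.
    diff = np.abs(y_true - y_pred) / np.clip(np.abs(y_true), 1e-7, None)
    return 100.0 * (1.0 - np.mean(diff <= threshold))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.05, 2.5, 3.0])

base = hard_metric(y_true, y_pred)
nudged = hard_metric(y_true, y_pred + 1e-4)
# A tiny nudge to the predictions does not change the metric at all,
# so a finite-difference (and hence analytic) gradient is zero here.
```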
Answer 0 (score: 0)
Judging from the error message, gradients for some of those operations are not implemented in Keras.
You can try to get the same result with plain TensorFlow operations (untested!):

diff = tf.abs(y_true - y_pred) / tf.clip_by_value(tf.abs(y_true), 1e-12, 1e12)
withinTenPct = tf.reduce_mean(tf.cast(tf.less_equal(diff, 0.1), tf.float32))
return 100.0 * (1.0 - withinTenPct)
Alternatively, you could also try tf.keras.losses.logcosh(y_true, y_pred), which seems to fit your use case. See the TF docs.
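To directly answer the question about approximating "less than or equal to": a common trick is to replace the hard indicator with a steep sigmoid, which is differentiable everywhere. The sketch below is in NumPy for clarity (the function name and the steepness parameter k are my own, not from the question); in Keras backend terms the same expression would be K.sigmoid(k * (0.1 - diff)), which exists in TF 1.12.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_within_pct_loss(y_true, y_pred, threshold=0.1, k=50.0):
    # Relative error, clipped away from zero as in the original loss.
    diff = np.abs(y_true - y_pred) / np.clip(np.abs(y_true), 1e-7, None)
    # Smooth surrogate for the indicator (diff <= threshold):
    # sigmoid(k * (threshold - diff)) is ~1 when diff << threshold
    # and ~0 when diff >> threshold; larger k makes the step sharper.
    soft_indicator = sigmoid(k * (threshold - diff))
    return 100.0 * (1.0 - np.mean(soft_indicator))
```

The trade-off is the usual one for soft surrogates: a larger k tracks the hard metric more closely but makes gradients vanish far from the threshold, so k may need tuning to the scale of your relative errors.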