I'm trying to create a custom loss function for a regression algorithm by numba-jitting a complicated function, as outlined here. It seems to work fine as a metric, but when I use it as a loss I get a strange error. I have a toy function that reproduces the problem:
import numpy as np
from numba import njit

@njit
def test_del(y_true, y_pred):
    cols = y_true.shape[1]
    out = 0
    for i in range(y_true.shape[1]):
        true_dam = np.abs(y_true[:, i]).max()  # toy
        pred_dam = np.abs(y_pred[:, i]).max()  # toy
        out += np.mean(np.abs(np.log(pred_dam / true_dam))**2)
    return out / cols
(Yes, I know this toy problem could be vectorized further, but it follows the structure of the real function I haven't been able to vectorize, so I've kept it as-is.)
Then I have a combined loss/metric function:
def del_loss(y_true, y_pred):
    return tf.numpy_function(test_del, [y_true, y_pred], tf.float64) + \
        K.cast(tf.keras.losses.mean_squared_error(y_true, y_pred), tf.float64)
Now, if I compile the model with del_loss as a metric, it works fine (as long as I cast everything to float64, which is odd, but never mind). However, if I use it as the loss, I get this strange error:
Traceback (most recent call last):
#removed my chain of objects resulting in a `model.compile(loss = del_loss)` call
File "C:\ProgramData\Anaconda3\envs\MLEnv\lib\site-packages\keras\backend\tensorflow_backend.py", line 75, in symbolic_fn_wrapper
return func(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\MLEnv\lib\site-packages\keras\engine\training.py", line 229, in compile
self.total_loss = self._prepare_total_loss(masks)
File "C:\ProgramData\Anaconda3\envs\MLEnv\lib\site-packages\keras\engine\training.py", line 692, in _prepare_total_loss
y_true, y_pred, sample_weight=sample_weight)
File "C:\ProgramData\Anaconda3\envs\MLEnv\lib\site-packages\keras\losses.py", line 73, in __call__
losses, sample_weight, reduction=self.reduction)
File "C:\ProgramData\Anaconda3\envs\MLEnv\lib\site-packages\keras\utils\losses_utils.py", line 166, in compute_weighted_loss
losses, None, sample_weight)
File "C:\ProgramData\Anaconda3\envs\MLEnv\lib\site-packages\keras\utils\losses_utils.py", line 76, in squeeze_or_expand_dimensions
elif weights_rank - y_pred_rank == 1:
TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'
Now, if I trace back to the last step, squeeze_or_expand_dimensions, I find I'm inside the if block that should only trigger when a sample_weight is present. In any case, the code leading up to the failing line is:
y_pred_rank = K.ndim(y_pred)
weights_rank = K.ndim(sample_weight)
if weights_rank != 0:
    if y_pred_rank == 0 and weights_rank == 1:
        y_pred = K.expand_dims(y_pred, -1)
    elif weights_rank - y_pred_rank == 1:
        sample_weight = K.squeeze(sample_weight, -1)
    elif y_pred_rank - weights_rank == 1:
        sample_weight = K.expand_dims(sample_weight, -1)
Neither y_pred_rank nor weights_rank should ever end up as None (even if weights were set to 1 earlier, as it appears to be in compute_weighted_loss, weights_rank should then end up as 0), yet apparently one of them does. How this relates to my new loss function is beyond me.
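The failing arithmetic can be reproduced in isolation. K.ndim returns None for a tensor whose static shape is unknown (which is the case for the output of tf.numpy_function), and subtracting None from an int is exactly the TypeError above. A minimal sketch of my own mimicking that branch, without TensorFlow:

```python
def squeeze_or_expand_sketch(y_pred_rank, weights_rank):
    # mirrors the branch from keras/utils/losses_utils.py shown above,
    # with the Keras ops replaced by labels for illustration
    if weights_rank != 0:
        if y_pred_rank == 0 and weights_rank == 1:
            return "expand y_pred"
        elif weights_rank - y_pred_rank == 1:  # fails when y_pred_rank is None
            return "squeeze weights"
        elif y_pred_rank - weights_rank == 1:
            return "expand weights"
    return "no-op"

# ranks from a tensor with a fully known static shape work fine
print(squeeze_or_expand_sketch(y_pred_rank=1, weights_rank=2))  # squeeze weights

# K.ndim gives None when the static shape is unknown,
# e.g. for the output of tf.numpy_function
try:
    squeeze_or_expand_sketch(y_pred_rank=None, weights_rank=1)
except TypeError as e:
    print(e)  # unsupported operand type(s) for -: 'int' and 'NoneType'
```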
Answer 0 (score: 1)
This dummy example works fine on my machine without numba:
import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Dense
from keras.models import Model

def test_del(y_true, y_pred):
    cols = y_true.shape[1]
    out = 0
    for i in range(y_true.shape[1]):
        true_dam = np.abs(y_true[:, i]).max()  # toy
        pred_dam = np.abs(y_pred[:, i]).max()  # toy
        out += np.mean(np.abs(np.log(pred_dam / true_dam))**2)
    return out / cols

def del_loss(y_true, y_pred):
    return tf.numpy_function(test_del, [y_true, y_pred], tf.float64) + \
        K.cast(tf.keras.losses.mean_squared_error(y_true, y_pred), tf.float64)

inp = Input((10,))
x = Dense(30)(inp)
out = Dense(10)(x)
model = Model(inp, out)
model.compile('adam', del_loss)
model.fit(np.random.uniform(0, 1, (3, 10)), np.random.uniform(0, 1, (3, 10)), epochs=3)