Introduction
I want to implement a custom loss function for Keras, because I am not satisfied with the current results on my dataset. I think the reason is that the built-in loss functions look at the entire dataset, whereas I only care about the highest values in it. That is why I came up with the idea for a custom loss function:
Custom loss function idea
The custom loss function should take the 4 predictions with the highest values, subtract the corresponding true value from each of them, take the absolute value of each difference, multiply it by a weight, and add it to the total loss.
To make this custom loss function easier to understand, I first programmed it with plain list inputs. Hopefully this example makes it clearer:
For i = 0, the example below computes loss = 4*abs(0.7-0.5) + 3*abs(0.5-0.7) + 2*abs(0.4-0.45) + 1*abs(0.4-0.4) = 1.5. (The three predictions of 0.4 are tied; sorting the (prediction, truth) pairs breaks the tie by the true value, so the pair (0.4, 0.4) gets weight 1.)
This is then divided by div_top, which is 10 in this example (giving 0.15 for i = 0). The same is repeated for every other i, and finally the mean over all samples is taken.
top = 4
div_top = 0.5*top*(top+1)  # sum of the weights 4+3+2+1 = 10

def own_loss(y_true, y_pred):
    loss_per_sample = [0]*len(y_pred)
    for i in range(len(y_pred)):
        # sort (prediction, truth) pairs by prediction value, ascending
        sorted_pred, sorted_true = (list(t) for t in zip(*sorted(zip(y_pred[i], y_true[i]))))
        for k in range(top):
            # weight the absolute error of the k-th highest prediction by (top - k)
            loss_per_sample[i] += (top-k)*abs(sorted_pred[-1-k]-sorted_true[-1-k])
    loss_per_sample = [t/div_top for t in loss_per_sample]
    return sum(loss_per_sample)/len(loss_per_sample)
y_pred = [[0.1, 0.4, 0.7, 0.4, 0.4, 0.5, 0.3, 0.2],
          [0.3, 0.8, 0.5, 0.3, 0.1, 0.0, 0.1, 0.5],
          [0.5, 0.6, 0.6, 0.8, 0.3, 0.6, 0.7, 0.1]]
y_true = [[0.2, 0.45, 0.5, 0.3, 0.4, 0.7, 0.22, 0.1],
          [0.4, 0.9, 0.3, 0.0, 0.2, 0.1, 0.11, 0.8],
          [0.4, 0.7, 0.4, 0.3, 0.4, 0.7, 0.6, 0.05]]
print(own_loss(y_true, y_pred)) # Output is 0.196667
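For reference, the same computation can also be written with NumPy. Below is a minimal sketch (own_loss_np is just an illustrative name); np.lexsort reproduces the pair sort above, i.e. ties in the prediction are broken by the true value:
import numpy as np

top = 4
div_top = 0.5*top*(top+1)

def own_loss_np(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    weights = np.arange(top, 0, -1, dtype=float)  # [4., 3., 2., 1.]
    total = 0.0
    for p, t in zip(y_pred, y_true):
        # indices sorted by prediction, ties broken by true value (like sorted(zip(p, t)))
        order = np.lexsort((t, p))
        top_idx = order[-top:][::-1]  # highest predictions first
        total += np.sum(weights * np.abs(p[top_idx] - t[top_idx])) / div_top
    return total / len(y_pred)

print(own_loss_np(y_true, y_pred))  # 0.196667, same as the list version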
Implementation in Keras
I want to use this function as a custom loss function in Keras. It would look like this:
import numpy as np
from keras.datasets import boston_housing
from keras.layers import LSTM
from keras.models import Sequential
from keras.optimizers import RMSprop
(pre_x_train, pre_y_train), (x_test, y_test) = boston_housing.load_data()
"""
The following 8 lines are to format the dataset to a 3D numpy array
4*101*13. I do this so that it matches my real dataset, which is formatted
to a 3D numpy array 47*731*179. It is not important to understand the following
8 lines for the loss function itself.
"""
x_train = [[0]*101 for _ in range(4)]  # comprehension needed: [[0]*101]*4 would alias one row 4 times
y_train = [[0]*101 for _ in range(4)]
for i in range(4):
    for k in range(101):
        x_train[i][k] = pre_x_train[i*101+k]
        y_train[i][k] = pre_y_train[i*101+k]
train_x = np.array([np.array([np.array(k) for k in i]) for i in x_train])
train_y = np.array([np.array([np.array(k) for k in i]) for i in y_train])
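# Note: assuming the default Boston Housing split (404 training samples),
# an equivalent and more compact construction would be:
#   train_x = pre_x_train[:404].reshape(4, 101, 13)
#   train_y = pre_y_train[:404].reshape(4, 101)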
top = 4
div_top = 0.5*top*(top+1)
def own_loss(y_true, y_pred):
    loss_per_sample = [0]*len(y_pred)
    for i in range(len(y_pred)):
        sorted_pred, sorted_true = (list(t) for t in zip(*sorted(zip(y_pred[i], y_true[i]))))
        for k in range(top):
            loss_per_sample[i] += (top-k)*abs(sorted_pred[-1-k]-sorted_true[-1-k])
    loss_per_sample = [t/div_top for t in loss_per_sample]
    return sum(loss_per_sample)/len(loss_per_sample)
model = Sequential()
model.add(LSTM(units=64, batch_input_shape=(None, 101, 13), return_sequences=True))
model.add(LSTM(units=101, return_sequences=False, activation='linear'))
# compile works with loss='mean_absolute_error' but not with loss=own_loss
model.compile(loss=own_loss, optimizer=RMSprop())
model.fit(train_x, train_y, epochs=16, verbose=2, batch_size=1, validation_split=None, shuffle=False)
Obviously, the Keras example above will not work as written. But I also had no idea how to get it working.
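The underlying problem is that Keras calls a loss function once with symbolic tensors of shape (batch_size, ...), not with Python lists, so len(), list indexing and sorted() cannot be used; the loss has to be built from backend/tensor operations. A minimal sketch of the required form (mae_loss is just an illustrative name, and this is plain mean absolute error, not yet the top-k loss):
from keras import backend as K

def mae_loss(y_true, y_pred):
    # operates on symbolic tensors and returns a loss tensor
    return K.mean(K.abs(y_pred - y_true), axis=-1)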
Approaches to solving the problem
I read the following article while trying to solve the problem:
How to use a custom objective function for a model?
I also read the Keras backend page and the TensorFlow top_k documentation.
The latter seemed like the most promising approach to me, but after implementing it in many different ways it still did not work. When sorting with top_k I could get the correct pred_y values, but I could not get the corresponding true_y values.
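For context, tf.nn.top_k returns both the sorted values and their indices, and it is the indices that make it possible to fetch the corresponding true_y entries. A minimal sketch (assuming TensorFlow 1.x graph mode, matching the code above):
import tensorflow as tf

pred = tf.constant([[0.1, 0.4, 0.7, 0.4, 0.4, 0.5, 0.3, 0.2]])
values, indices = tf.nn.top_k(pred, k=4)
with tf.Session() as sess:
    print(sess.run(values))   # [[0.7 0.5 0.4 0.4]]
    print(sess.run(indices))  # [[2 5 1 3]], ties resolved toward the lower index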
Does anyone know how to implement such a custom loss function?
Answer 0 (score: 0)
tf.nn.top_k sorts the tensor. This means that "if two elements are equal, the lower-index element appears first", as described in the API document.
import tensorflow as tf

top = 4
div_top = 0.5*top*(top+1)

def getitems_by_indices(values, indices):
    # for each sample (row), gather the entries of `values` at `indices`
    return tf.map_fn(
        lambda x: tf.gather(x[0], x[1]), (values, indices), dtype=values.dtype
    )

def own_loss(y_true, y_pred):
    # values and indices of the `top` highest predictions per sample
    y_pred_top_k, y_pred_ind_k = tf.nn.top_k(y_pred, top)
    # the true values at those same indices
    y_true_top_k = getitems_by_indices(y_true, y_pred_ind_k)
    loss_per_sample = tf.reduce_mean(
        tf.reduce_sum(
            tf.abs(y_pred_top_k - y_true_top_k) *
            tf.range(top, 0, delta=-1, dtype=y_pred.dtype),
            axis=-1
        ) / div_top
    )
    return loss_per_sample
model = Sequential()
model.add(LSTM(units=64, batch_input_shape=(None, 101, 13), return_sequences=True))
model.add(LSTM(units=101, return_sequences=False, activation='linear'))
# compile now also works with loss=own_loss as defined above
model.compile(loss=own_loss, optimizer=RMSprop())
model.train_on_batch(train_x, train_y)
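To sanity-check the tensor implementation against the list example from the question, it can be evaluated directly (a sketch, assuming TF 1.x sessions). Because tf.nn.top_k breaks ties toward the lower index while the list version's sort breaks ties by the true value, the result on the tied example data differs slightly (about 0.2 instead of 0.196667):
import numpy as np

y_pred_np = np.array([[0.1, 0.4, 0.7, 0.4, 0.4, 0.5, 0.3, 0.2],
                      [0.3, 0.8, 0.5, 0.3, 0.1, 0.0, 0.1, 0.5],
                      [0.5, 0.6, 0.6, 0.8, 0.3, 0.6, 0.7, 0.1]], dtype=np.float32)
y_true_np = np.array([[0.2, 0.45, 0.5, 0.3, 0.4, 0.7, 0.22, 0.1],
                      [0.4, 0.9, 0.3, 0.0, 0.2, 0.1, 0.11, 0.8],
                      [0.4, 0.7, 0.4, 0.3, 0.4, 0.7, 0.6, 0.05]], dtype=np.float32)

loss_t = own_loss(tf.constant(y_true_np), tf.constant(y_pred_np))
with tf.Session() as sess:
    print(sess.run(loss_t))  # ~0.2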
Is there a better implementation of getitems_by_indices()? The current implementation of getitems_by_indices() used Sungwoon Kim's idea.
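A possible simplification (an assumption, not from the original answer): in TensorFlow versions where tf.gather supports the batch_dims argument (1.14+/2.x), the per-sample map_fn can be replaced by a single batched gather:
def getitems_by_indices(values, indices):
    # for each row, gather the entries of `values` at the given column indices
    return tf.gather(values, indices, batch_dims=1)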