I'm creating a custom loss function. Everything I built before this function works fine, but I'm hitting an error when the gradients are computed:
LookupError: No gradient defined for operation 'loss/target_global_pool_loss/while/RandomShuffle' (op type: RandomShuffle)
I'm not sure whether it's how I handle things inside the TensorFlow while loop, but if I open a Python terminal I do get a float value back:
import tensorflow as tf
from warp_loss import warp_loss
a = [0,1,0,1,1,1,0,0,1]
b = [0.5,0.5,0.3,0.7,0.8,0.9,0.,0.2,0.2]
a = tf.constant(a)
b = tf.constant(b)
sess = tf.InteractiveSession()
loss = warp_loss(a,b)
loss.eval()
0.41588834
loss
<tf.Tensor 'while_3/Exit_1:0' shape=() dtype=float32>
import tensorflow as tf
from keras import backend as K

def warp_loss(y_true, y_pred):
    """
    Implementation of the WARP loss function.
    Arguments:
    y_true -- true binary labels, required by the Keras loss signature.
    y_pred -- prediction values in [0, 1].
    Returns:
    loss -- real number, value of the loss
    """
    # Mask that is 1 where the label is negative, 0 where it is positive
    neg_mask = tf.where(tf.equal(y_true, 0), tf.ones_like(y_pred), tf.zeros_like(y_pred))
    # Get positive and negative scores (boolean_mask expects a bool mask)
    positives = tf.boolean_mask(y_pred, tf.cast(y_true, tf.bool))
    negatives = tf.boolean_mask(y_pred, tf.cast(neg_mask, tf.bool))
    loss = tf.constant(0, dtype=tf.float32)
    p = tf.constant(0)
    # Loop over all positives
    while_condition = lambda p, loss: tf.less(p, tf.shape(positives)[0])
    def sampling(p, loss):
        # Simulate random sampling without replacement
        shuffled = tf.random.shuffle(negatives)
        # Index of the first negative scored above the current positive,
        # or -1 if no negative outranks it (low loss)
        violations = K.cast(K.greater(shuffled, positives[p]), K.floatx())
        sample_i = tf.cond(K.sum(violations) > 0,
                           lambda: tf.cast(tf.argmax(violations), tf.float32),
                           lambda: tf.constant(-1., tf.float32))
        # Every positive is equally weighted (therefore -1 defers to the investigated positive class)
        L = tf.log(tf.cast(tf.shape(negatives)[0], tf.float32) / (sample_i + 1.))
        distance = tf.cast(shuffled[tf.cast(sample_i, tf.int32)], tf.float32) - tf.cast(positives[p], tf.float32)
        # Accumulate the rank-weighted margin; contribute zero when no violation was found
        individual_loss = tf.cond(sample_i >= 0,
                                  lambda: L * distance,
                                  lambda: tf.constant(0., tf.float32))
        return [tf.add(p, 1), tf.add(loss, individual_loss)]
    _, loss = tf.while_loop(while_condition, sampling, [p, loss])
    return loss
I expect my output to be a float value, just like any other loss function.
My input is of shape (i, j, channels) and the output is a binary list over the potential classes. I train with train_on_batch, one sample per batch (this is where it fails):
File "train.py", line 319, in <module>
batch_out = model.train_on_batch(np.array([npzobj['features']]), np.array([npzobj['targets']]))
File "/lib/python3.5/site-packages/keras/engine/training.py", line 1216, in train_on_batch
self._make_train_function()
File "/lib/python3.5/site-packages/keras/engine/training.py", line 509, in _make_train_function
loss=self.total_loss)
File "/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/lib/python3.5/site-packages/keras/optimizers.py", line 184, in get_updates
grads = self.get_gradients(loss, params)
File "/lib/python3.5/site-packages/keras/optimizers.py", line 89, in get_gradients
grads = K.gradients(loss, params)
File "/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 2757, in gradients
return tf.gradients(loss, variables, colocate_gradients_with_ops=True)
File "/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py", line 664, in gradients
unconnected_gradients)
File "/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py", line 923, in _GradientsHelper
(op.name, op.type))
LookupError: No gradient defined for operation 'loss/target_global_pool_loss/while/RandomShuffle' (op type: RandomShuffle)
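For reference, the loss is wired into the model roughly like this (a minimal sketch; the architecture and optimizer here are placeholders, not my real model):

import numpy as np
from keras import models, layers

# Placeholder model; the real architecture is not shown here
model = models.Sequential([layers.Dense(9, activation='sigmoid', input_shape=(16,))])
# Compiling with warp_loss is what later makes Keras call K.gradients on it
model.compile(optimizer='adam', loss=warp_loss)
# Fails with the LookupError above while building the train function
model.train_on_batch(np.random.rand(1, 16), np.random.randint(0, 2, (1, 9)))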
Answer 0 (score: 0)
Apparently no gradient is defined for random shuffle; however, a workaround following this solution, GPU kernel for tf.random_shuffle, fixed my problem:
# Shuffle indices rather than values: tf.gather has a registered gradient, RandomShuffle does not
shuffled = tf.gather(negatives, tf.random.shuffle(tf.range(tf.shape(negatives)[0])))
# Instead of
shuffled = tf.random.shuffle(negatives)
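To see why this works: the shuffle now only permutes integer indices, which need no gradient, while the differentiable path from negatives to the loss goes through tf.gather. A minimal check (TF 1.x graph mode; the tensor x is just an example):

import tensorflow as tf

x = tf.constant([0.1, 0.5, 0.9])
idx = tf.random.shuffle(tf.range(tf.shape(x)[0]))  # integer indices, no gradient required
shuffled = tf.gather(x, idx)                       # gradient flows through tf.gather
grads = tf.gradients(tf.reduce_sum(shuffled ** 2), x)
with tf.Session() as sess:
    print(sess.run(grads))  # [array([0.2, 1. , 1.8], dtype=float32)]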