WARP loss in TensorFlow

Time: 2019-02-01 18:33:29

Tags: tensorflow

I have noticed attempts to implement WARP loss in Keras, e.g. (Implimentation of WARP loss in Keras), but I have not seen any GitHub repos or published TensorFlow versions of WARP loss. I can see where to start implementing the algorithm.

The current implementation is:

import numpy as np
import tensorflow as tf

def warp_loss(y, yhat):
    # y: (10, 1) binary labels
    # yhat: (10, 1) predicted scores
    # For every positive, randomly sample negatives until yhat_pos < yhat_neg
    max_tries = 9
    y = tf.squeeze(y)
    y = tf.Print(y, [y], summarize=-1)
    yhat = tf.squeeze(yhat)
    # Gather indices of the positive (nonzero) labels
    zero = tf.constant(0, dtype=tf.float32)
    one_ind = tf.where(tf.not_equal(y, zero))
    # Gather indices of the negative (zero) labels
    zero_ind = tf.where(tf.equal(y, zero))
    one_ind = tf.squeeze(one_ind, -1)
    zero_ind = tf.squeeze(zero_ind, -1)
    one_ind = tf.Print(one_ind, [one_ind], summarize=-1)
    time_steps = tf.shape(y)[0]
    searches = tf.constant(1)
    # Loop for random sampling
    def condition(x):
        x = tf.add(x, 1)
        return x <= time_steps

    def body(x):
        # Sample one positive and one negative index and compare their scores
        # (tf.where returns int64 indices, so py_func must be declared int64)
        r_pos = tf.reshape(tf.py_func(lambda v: np.random.choice(v, 1), [one_ind], tf.int64), ())
        r_neg = tf.reshape(tf.py_func(lambda v: np.random.choice(v, 1), [zero_ind], tf.int64), ())
        res = tf.cond(
            tf.less(yhat[r_pos], yhat[r_neg]),
            lambda: tf.multiply(
                tf.subtract(yhat[r_neg], yhat[r_pos]),
                tf.cast(tf.log(tf.divide(x, tf.constant([max_tries]))), tf.float32)),
            lambda: tf.constant(0, dtype=tf.float32))
        return tf.reshape(res, ())

    # N = searches
    # L = np.log(9) / N
    # total_loss = L * difference
    res = tf.while_loop(condition, body, [searches])
    return tf.cast(res, tf.float32)

But it raises the following error:

ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
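For reference, the sampling scheme the code is trying to express (and which the commented-out lines `N = searches`, `L = np.log(9)/N` hint at) can be sketched in plain NumPy, outside the TensorFlow graph. The helper `warp_loss_np` and its `log1p(rank)` weighting are illustrative assumptions following the usual WARP formulation, not code from the question:

```python
import numpy as np

def warp_loss_np(y, yhat, max_tries=9, rng=None):
    """Reference (non-differentiable) WARP loss for one example.

    y    : binary label vector (1 = positive item)
    yhat : predicted scores, same shape as y

    For each positive item, sample negatives until one outscores it,
    then weight the margin by a log of the estimated rank.
    Hypothetical helper for illustration only.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    pos = np.flatnonzero(y != 0)
    neg = np.flatnonzero(y == 0)
    loss = 0.0
    for p in pos:
        for tries in range(1, max_tries + 1):
            n = rng.choice(neg)
            if yhat[p] < yhat[n]:            # violation found
                # rank estimate: fewer tries needed => higher rank
                rank = len(neg) // tries
                loss += np.log1p(rank) * (yhat[n] - yhat[p])
                break
    return loss
```

Note that the inner sampling loop is inherently non-differentiable, which is also why routing the sampling through `tf.py_func` leaves the graph with `None` gradients.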

0 Answers:

No answers yet.