How to implement a pairwise loss function in TensorFlow?

Date: 2017-12-24 00:52:08

Tags: tensorflow machine-learning deep-learning loss-function

I am implementing a customized pairwise loss function in tensorflow. To give a simple example, suppose the training data has 5 instances and its labels are

y=[0,1,0,0,0]

Assume the predictions are

y'=[y0',y1',y2',y3',y4']

In this case, a simple loss function could be

min f=(y0'-y1')+(y2'-y1')+(y3'-y1')+(y4'-y1')

since y[1]=1. I just want to make sure that the predictions y0', y2', y3', y4' are "far" from y1'.
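
For concreteness, this is what that objective looks like on the toy example (a plain NumPy sketch just to illustrate the intent; the prediction values are made up):

import numpy as np

y_pred = np.array([0.3, 0.9, 0.1, 0.4, 0.2])  # made-up predictions y0'..y4'
pos = y_pred[1]                                # prediction for the instance labeled 1
neg = np.delete(y_pred, 1)                     # predictions for the instances labeled 0
f = np.sum(neg - pos)                          # (y0'-y1') + (y2'-y1') + (y3'-y1') + (y4'-y1')
print(f)                                       # -2.6 here; more negative means the 0-labeled predictions sit further below y1'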

However, I do not know how to implement this in tensorflow. In my current implementation I use mini-batches and set the training labels as a placeholder, like: y = tf.placeholder("float", [None, 1]). In that case I cannot build the loss function, because due to the "None" I do not know the size of the training data, nor which instances are labeled "1" or "0".

Can anyone suggest how to do this in tensorflow? Thanks!

2 Answers:

Answer 0 (score: 2)

You can preprocess the data outside the model.

For example:

First, split the positive and negative instances into two groups of inputs:

# data.py

import random

def load_data(data_x, data_y, k=1):
    """
    data_x: list of all instances
    data_y: list of their labels
    k: how many negative instances to sample for each positive one
    """
    pos_x = []
    neg_x = []
    for x, y in zip(data_x, data_y):
        if y == 1:
            pos_x.append(x)
        else:
            neg_x.append(x)

    ret_pos_x = []
    ret_neg_x = []

    # randomly sample k negative instances for each positive one
    for x0 in pos_x:
        for x1 in random.sample(neg_x, k):
            ret_pos_x.append(x0)
            ret_neg_x.append(x1)

    return ret_pos_x, ret_neg_x

Next, in your model, define 2 placeholders instead of 1:

# model.py

import tensorflow as tf

class Model:
    def __init__(self, dim_x, learning_rate=0.001):
        # shape: [batch_size, dim_x] (assume x are vectors of dim_x)
        self.pos_x = tf.placeholder(tf.float32, [None, dim_x])  
        self.neg_x = tf.placeholder(tf.float32, [None, dim_x])

        # shape: [batch_size]
        # NOTE: variables in some_func should be reused
        self.pos_y = some_func(self.pos_x)
        self.neg_y = some_func(self.neg_x)

        # A more generalized form: loss = max(0, margin - y+ + y-)
        self.loss = tf.reduce_mean(tf.maximum(0.0, 1.0 - self.pos_y + self.neg_y))
        self.train_op = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
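
some_func above is left abstract on purpose. As a minimal sketch, assuming a small dense network (the layer sizes and the scope name "score_net" are arbitrary and not part of the model itself), it could look like the following; the key point is that both calls share the same variables:

# a possible some_func: a small shared scoring network (illustrative only)
def some_func(x):
    # reuse=tf.AUTO_REUSE (TF 1.4+) lets the second call reuse the variables created by the first
    with tf.variable_scope("score_net", reuse=tf.AUTO_REUSE):
        h = tf.layers.dense(x, 64, activation=tf.nn.relu, name="hidden")
        score = tf.layers.dense(h, 1, name="score")
    # squeeze to shape [batch_size] so it matches the loss above
    return tf.squeeze(score, axis=1)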

Finally, iterate over your data and feed the model:

# main.py

import tensorflow as tf 

from model import Model
from data import load_data

data_x, data_y = ...  # read from your file
pos_x, neg_x = load_data(data_x, data_y)

batch_size = 32  # mini-batch size, pick whatever fits your data
model = Model(dim_x=len(data_x[0]))  # each instance is assumed to be a vector of dim_x

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # TODO: randomize the order
    for beg in range(0, len(pos_x), batch_size):
        end = min(beg + batch_size, len(pos_x))

        feed_dict = {
            model.pos_x: pos_x[beg:end],
            model.neg_x: neg_x[beg:end]
        }
        _, loss = sess.run([model.train_op, model.loss], feed_dict)
        print("%s/%s, loss = %s" % (beg, len(pos_x), loss))

Answer 1 (score: 0)

Suppose we have labels such as y=[0,1,0,0,0]

Convert them to Y=[-1,1,-1,-1,-1]

The predictions are y'=[y0',y1',y2',y3',y4']

So the objective is min f = -mean(Y*y')

Note that the above formula is equivalent to your statement (up to a constant weight on the positive term).
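
A minimal sketch of this idea in TensorFlow, keeping the placeholder style from the question; dim_x and the linear layer are only stand-ins for your own feature size and model:

import tensorflow as tf

dim_x = 10  # feature dimension, just for illustration

# x holds the inputs, y holds the 0/1 labels, exactly as in the question
x = tf.placeholder(tf.float32, [None, dim_x])
y = tf.placeholder(tf.float32, [None, 1])

# any model that outputs one score per instance works; a linear layer as a stand-in
preds = tf.layers.dense(x, 1)

Y = 2.0 * y - 1.0                  # map labels {0, 1} -> {-1, +1}
loss = -tf.reduce_mean(Y * preds)  # f = -mean(Y * y')
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)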