TensorFlow: iterating over a tensor

Date: 2019-04-18 06:51:33

Tags: python tensorflow

I have defined a loss function, and I want to iterate over every item in the batch to compute it. I used tf.map_fn, but it turns out to be very slow. Any suggestions?

import numpy as np
import tensorflow as tf


def loss(phi, mu, sigma, t_phi, t_mu, t_sigma):
    # O(n^2) Python loop over all (i, j) pairs
    _loss = 0.0
    for i in range(phi.shape[0]):
        for j in range(phi.shape[0]):
            _loss += phi[i] * phi[j] * pdf(mu[i], mu[j], tf.sqrt(sigma[i]**2 + sigma[j]**2))
            _loss += t_phi[i] * t_phi[j] * pdf(t_mu[i], t_mu[j], tf.sqrt(t_sigma[i]**2 + t_sigma[j]**2))
            _loss += -2 * phi[i] * t_phi[j] * pdf(mu[i], t_mu[j], tf.sqrt(sigma[i]**2 + t_sigma[j]**2))
    return tf.sqrt(_loss)

def reduce_loss(phi, mu, sigma, t_phi, t_mu, t_sigma):
    # note: binding the scope with `as loss` would shadow the loss() function
    # above inside this function, so the scope is not bound to a name here
    with tf.variable_scope('loss'):
        stacked = tf.stack([phi, mu, sigma, t_phi, t_mu, t_sigma], 1)
        return tf.map_fn(lambda x: loss(x[0], x[1], x[2], x[3], x[4], x[5]), stacked,
                         parallel_iterations=4)

def pdf(x, mu, sigma):
    # Gaussian density of x under N(mu, sigma^2)
    return tf.exp(-0.5*(x-mu)**2/sigma**2) / ((2*np.pi*sigma**2)**0.5)

The batch size is 1024.

1 Answer:

Answer 0 (score: 3)

You can eliminate the loops in your loss function by vectorising everything. For example, you iterate over i and j to compute phi[i] * phi[j], but that is just the ij-th element of the outer product tf.matmul(phi[:, None], phi[None, :]). Computing it this way is much faster than a loop-based implementation.
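
As a quick sanity check of that identity, here is a minimal sketch (with made-up values, not from the original answer):

import tensorflow as tf

phi = tf.constant([1., 2., 3.])
outer = tf.matmul(phi[:, None], phi[None, :])  # shape [3, 3]
with tf.Session() as sess:
    print(sess.run(outer))  # outer[i, j] == phi[i] * phi[j]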

Moreover, because TensorFlow builds its graph statically, a function like yours may take a very long time just to construct the graph, since every loop iteration is unrolled into graph ops. Large nested Python for-loops should therefore generally be avoided in TensorFlow.
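
You can observe the graph-size effect directly; the following illustrative sketch (a small size is chosen for brevity) counts how many ops the unrolled loop adds:

import tensorflow as tf

g = tf.Graph()
with g.as_default():
    v = tf.random.uniform([8])
    s = tf.constant(0.0)
    for i in range(8):
        for j in range(8):
            s += v[i] * v[j]  # each iteration adds several ops to the graph
print(len(g.get_operations()))  # grows quadratically with the loop size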

Below is an example with one part of your loss function; generalising it to the remaining terms is straightforward.

import tensorflow as tf
from numpy import pi as PI
from time import time


# some random vectors
size = 10
phi = tf.random.uniform([size])
mu = tf.random.uniform([size])
sigma = tf.random.uniform([size])


####################################
# Your original loss
####################################

def pdf(x, m, s):
    return tf.exp(-0.5*(x-m)**2/s**2) / ((2*PI*s**2)**0.5)


def loss():
    _loss = 0.0
    for i in range(phi.shape[0]):
        for j in range(phi.shape[0]):
            _loss += phi[i] * phi[j] * pdf(mu[i], mu[j], tf.sqrt(sigma[i]**2 + sigma[j]**2))
    return tf.sqrt(_loss)


####################################
# vectorised loss
####################################

def vector_pdf(x, s):
    return tf.exp(-0.5*x**2/s**2) / ((2*PI*s**2)**0.5)


def vectorised_loss():
    phi_ij = tf.matmul(phi[:, None], phi[None, :])
    difference = mu[:, None] - mu[None, :]
    sigma_squared = sigma**2
    sigma_sum = tf.sqrt(sigma_squared[:, None] + sigma_squared[None, :])

    loss_array = phi_ij*vector_pdf(difference, sigma_sum)
    return tf.sqrt(tf.reduce_sum(loss_array))


#######################################
# Time the functions and show they are the same
#######################################

with tf.Session() as sess:
    loop_loss = loss()
    vector_loss = vectorised_loss()
    # no tf.Variables are created here, so no initializer is needed

    t = 0.
    for _ in range(100):
        st = time()
        loop_loss_val = sess.run(loop_loss)
        t += time() - st
    print('loop took {}'.format(t/100))

    t = 0.
    for _ in range(100):
        st = time()
        vector_val = sess.run(vector_loss)
        t += time() - st
    print('vector took {}'.format(t / 100))

    l_val, v_val = sess.run([loop_loss, vector_loss])
    print(l_val, v_val)

This prints:

loop took 0.01740453243255615
vector took 0.004280190467834472
4.6466274 4.6466274

With the loss function vectorised, your reduce function should also be easy to vectorise. You will now want a batched matmul, and to change the indexing of the subtraction slightly. For example:

mu[:, None] - mu[None, :]
# becomes
mu[:, :, None] - mu[:, None, :]
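
Putting those pieces together, a fully batched version of the vectorised loss might look like this (a sketch, not part of the original answer; it assumes each input has shape [batch, size] and reuses vector_pdf from above):

def batched_loss(phi, mu, sigma):
    # batched outer product: [batch, size, 1] x [batch, 1, size] -> [batch, size, size]
    phi_ij = tf.matmul(phi[:, :, None], phi[:, None, :])
    difference = mu[:, :, None] - mu[:, None, :]
    sigma_squared = sigma**2
    sigma_sum = tf.sqrt(sigma_squared[:, :, None] + sigma_squared[:, None, :])
    loss_array = phi_ij * vector_pdf(difference, sigma_sum)
    # sum over the pairwise dimensions, keeping one loss value per batch item
    return tf.sqrt(tf.reduce_sum(loss_array, axis=[1, 2]))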