TensorFlow vs. NumPy performance

Time: 2017-03-09 18:13:35

Tags: performance numpy tensorflow

I am computing the mean and standard deviation in numpy. To improve performance, I tried the same computation in TensorFlow, but TensorFlow is at least 10x slower. I tried two approaches in TensorFlow (code below). The first approach uses tf.nn.moments(), which has a bug that causes it to sometimes return a negative value for the variance. In the second approach, I compute the variance via other TensorFlow functions.

I have tried CPU-only and GPU; numpy is always faster.

I used time.time() instead of time.clock() to measure wall-clock time when using the GPU.

Why is TensorFlow slower? I thought it might be due to transferring data to the GPU, but TF is slower even for very small datasets (where transfer time should be negligible), and even when using only the CPU. Is this due to the overhead time required to initialize TF?
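
As context for the negative variances from tf.nn.moments() mentioned above: a plausible explanation (not confirmed from TF's source here) is the single-pass formula Var(x) = E[x²] − E[x]², which suffers catastrophic cancellation in float32 when the mean is large relative to the spread. A minimal numpy sketch of the effect:

```python
import numpy as np

# Two float32 samples with a large mean (~1e4) and a tiny spread (~±0.01).
x = np.array([10000.01, 9999.99], dtype=np.float32)

# Single-pass formula: E[x^2] - E[x]^2. Both terms are ~1e8, so their
# difference drowns in float32 rounding error.
naive = np.mean(x ** 2) - np.mean(x) ** 2

# Two-pass formula: subtract the mean first, then square.
two_pass = np.mean((x - np.mean(x)) ** 2)

print('single-pass:', naive)     # collapses to 0.0 here; can even go negative
print('two-pass   :', two_pass)  # close to the true variance, ~1e-4
```

With noisier data the single-pass result can land slightly below zero, which would explain a negative "variance"; the two-pass formula is always non-negative.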

import tensorflow as tf
import numpy
import time
import math

class Timer:
    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        self.end = time.time()
        self.interval = self.end - self.start

inData = numpy.random.uniform(low=-1, high=1, size=(40000000,))

with Timer() as t:
    mean = numpy.mean(inData)
print('python mean', mean, 'time', t.interval)

with Timer() as t:
    stdev = numpy.std(inData)
print('python stdev', stdev, 'time', t.interval)

# Approach 1 (Note tf.nn.moments() has a bug)
with Timer() as t:
    with tf.Graph().as_default():
        meanTF, varianceTF = tf.nn.moments(tf.constant(inData), axes=[0])
        init_op = tf.global_variables_initializer()
        with tf.Session() as sess:
            sess.run(init_op)
            mean, variance = sess.run([meanTF, varianceTF])
print('variance', variance)
stdev = math.sqrt(variance)
print('tensorflow mean', mean, 'stdev', stdev, 'time', t.interval)

# Approach 2
with Timer() as t:
    with tf.Graph().as_default():
        inputVector = tf.constant(inData)
        meanTF = tf.reduce_mean(inputVector)
        length = tf.size(inputVector)
        # Use the TF mean tensor here, not the numpy `mean` computed earlier.
        varianceTF = tf.divide(tf.reduce_sum(tf.squared_difference(inputVector, meanTF)),
                               tf.to_double(length))
        init_op = tf.global_variables_initializer()
        with tf.Session() as sess:
            sess.run(init_op)
            mean, variance = sess.run([meanTF, varianceTF])
print('variance', variance)
stdev = math.sqrt(variance)
print('tensorflow mean', mean, 'stdev', stdev, 'time', t.interval)
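
One way to test the initialization-overhead hypothesis raised above is to run the same computation several times and report the first call next to the steady state; if the first call dominates, one-time startup cost (rather than the computation itself) explains the gap. A generic harness, sketched with numpy (the same pattern can wrap a sess.run call):

```python
import time
import numpy as np

def profile(fn, repeats=5):
    """Time fn() `repeats` times; return (first_call, median_of_rest) in seconds."""
    timings = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - t0)
    return timings[0], float(np.median(timings[1:]))

data = np.random.uniform(low=-1, high=1, size=(1000000,))
first, steady = profile(lambda: np.mean(data))
print('first call %.2f ms, steady state %.2f ms' % (first * 1e3, steady * 1e3))
```

For TF 1.x this matters twice over: graph construction and session creation happen once, and even the first sess.run can trigger extra one-time work, so only the steady-state number is comparable to numpy.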

2 Answers:

Answer 0 (score: 2)

Here is a slightly better benchmark. Tested on a Xeon V3 with TensorFlow CPU compiled with all optimization options + XLA from here, against the MKL-backed numpy that ships with the latest Anaconda.

XLA probably makes no difference here, but it is left in for posterity.

Notes:

  1. Exclude the first few runs from timing; they can include initialization/profiling.

  2. Use a variable to avoid copying the input into the TensorFlow runtime.

  3. Perturb the variable between calls to make sure there is no caching.

  4. Results:

       numpy 23.5 ms, 25.7 ms
          tf 14.7 ms, 20.5 ms
    

    Code:

    import numpy as np
    import tensorflow as tf
    import time
    from tensorflow.contrib.compiler import jit
    jit_scope = jit.experimental_jit_scope
    
    inData = np.random.uniform(low=-1, high=1, size=(40000000,)).astype(np.float32)
    #inDataFeed = tf.placeholder(inData.dtype)
    
    with jit_scope(compile_ops=True):
        inDataVar = tf.Variable(inData)
        meanTF = tf.reduce_mean(inDataVar)
    
    
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    num_tries = 10


    times = []
    for i in range(num_tries):
        t0 = time.perf_counter()
        mean = np.mean(inData)
        times.append(time.perf_counter()-t0)
    
    print("%10s %.1f ms, %.1f ms" %("numpy", 10**3*min(times),
                                    10**3*np.median(times)))
    
    times = []
    perturb = inDataVar.assign_add(tf.random_uniform(inData.shape))
    for i in range(num_tries):
        sess.run(perturb)
        t0 = time.perf_counter()
        mean, = sess.run([meanTF])
        times.append(time.perf_counter()-t0)
    
    times = times[2:] # discard first few because they could include profiling runs
    print("%10s %.1f ms, %.1f ms" %("tf", 10**3*min(times),
                                    10**3*np.median(times)))
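
One caveat when comparing these numbers to the question: this benchmark casts the input to float32, while the question's array is float64. That halves the memory traffic of a bandwidth-bound reduction like the mean, so part of any difference can come from the dtype alone; when comparing numpy to TF, keep the dtypes identical. A quick sanity check on the size difference and on agreement of the results (timings are left out since they vary by machine):

```python
import numpy as np

rng = np.random.RandomState(0)
data64 = rng.uniform(low=-1, high=1, size=(1000000,))  # float64 by default
data32 = data64.astype(np.float32)                     # half the bytes

m64 = np.mean(data64)
m32 = np.mean(data32)

print('float64 bytes:', data64.nbytes, ' float32 bytes:', data32.nbytes)
print('float64 mean:', m64, ' float32 mean:', float(m32))
```

The two means agree closely for data of this scale, so for this workload float32 trades essentially no accuracy for half the memory bandwidth.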
    

Answer 1 (score: 1)

Here is a benchmark from someone who claims that TF mean is significantly faster than in numpy or theano. The benchmark is here, and it was tested on an Intel Core i5-4460 CPU with 16 GiB RAM and an Nvidia GTX 970 with 4 GiB RAM, using Theano 0.8.2, Tensorflow 0.11.0, and CUDA 8.0 on Linux Mint 18.

(image: benchmark chart)

Here are some other benchmarks, but they do not cover the mean.