TensorFlow GPU utilization only at 60% (GTX 1070)

Asked: 2016-10-23 09:50:09

Tags: performance machine-learning neural-network tensorflow nvidia

I am training a CNN model with TensorFlow. I only get about 60% (±2-3%) GPU utilization, without large dips.

Sun Oct 23 11:34:26 2016       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    Off  | 0000:01:00.0     Off |                  N/A |
|  1%   53C    P2    90W / 170W |   7823MiB /  8113MiB |     60%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      3644    C   /usr/bin/python2.7                            7821MiB |
+-----------------------------------------------------------------------------+

Since it is a Pascal card, I am using CUDA 8 with cuDNN 5.1.5. CPU usage is around 50%, spread evenly over 8 threads (i7 4770k), so CPU performance should not be the bottleneck.

I am using TensorFlow's binary file format and reading it with tf.TFRecordReader().

I create the image batches like this:

# Uses tf.TFRecordReader() to read a single Example
label, image = read_and_decode_single_example(filename_queue=filename_queue) 
image = tf.image.decode_jpeg(image.values[0], channels=3)
jpeg = tf.cast(image, tf.float32) / 255.
jpeg.set_shape([66,200,3])
images_batch, labels_batch = tf.train.shuffle_batch(
    [jpeg, label], batch_size=FLAGS.batch_size,
    num_threads=8,
    capacity=2000,           # tried bigger values here, does not change the performance
    min_after_dequeue=1000)  # here too

This is my training loop:

sess = tf.Session()

sess.run(init)
tf.train.start_queue_runners(sess=sess)
for step in xrange(FLAGS.max_steps):
    # pull a batch from the input queue into Python (host memory)
    labels, images = sess.run([labels_batch, images_batch])
    feed_dict = {images_placeholder: images, labels_placeholder: labels}
    # feed the same batch back in for the training step
    _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)

I don't have much experience with TensorFlow, and I can't tell where the bottleneck is. If you need more code snippets to help identify the problem, I will provide them.

Update: bandwidth test results

==5172== NVPROF is profiling process 5172, command: ./bandwidthtest

Device: GeForce GTX 1070
Transfer size (MB): 3960

Pageable transfers
  Host to Device bandwidth (GB/s): 7.066359
  Device to Host bandwidth (GB/s): 6.850315

Pinned transfers
  Host to Device bandwidth (GB/s): 12.038037
  Device to Host bandwidth (GB/s): 12.683915

==5172== Profiling application: ./bandwidthtest
==5172== Profiling result:
Time(%)      Time     Calls       Avg       Min       Max  Name
 50.03%  933.34ms         2  466.67ms  327.33ms  606.01ms  [CUDA memcpy DtoH]
 49.97%  932.32ms         2  466.16ms  344.89ms  587.42ms  [CUDA memcpy HtoD]

==5172== API calls:
Time(%)      Time     Calls       Avg       Min       Max  Name
 46.60%  1.86597s         4  466.49ms  327.36ms  606.15ms  cudaMemcpy
 35.43%  1.41863s         2  709.31ms  632.94ms  785.69ms  cudaMallocHost
 17.89%  716.33ms         2  358.17ms  346.14ms  370.19ms  cudaFreeHost
  0.04%  1.5572ms         1  1.5572ms  1.5572ms  1.5572ms  cudaMalloc
  0.02%  708.41us         1  708.41us  708.41us  708.41us  cudaFree
  0.01%  203.58us         1  203.58us  203.58us  203.58us  cudaGetDeviceProperties
  0.00%  187.55us         1  187.55us  187.55us  187.55us  cuDeviceTotalMem
  0.00%  162.41us        91  1.7840us     105ns  61.874us  cuDeviceGetAttribute
  0.00%  79.979us         4  19.994us  1.9580us  73.537us  cudaEventSynchronize
  0.00%  77.074us         8  9.6340us  1.5860us  28.925us  cudaEventRecord
  0.00%  19.282us         1  19.282us  19.282us  19.282us  cuDeviceGetName
  0.00%  17.891us         4  4.4720us     629ns  8.6080us  cudaEventDestroy
  0.00%  16.348us         4  4.0870us     818ns  8.8600us  cudaEventCreate
  0.00%  7.3070us         4  1.8260us  1.7040us  2.0680us  cudaEventElapsedTime
  0.00%  1.6670us         3     555ns     128ns  1.2720us  cuDeviceGetCount
  0.00%     813ns         3     271ns     142ns     439ns  cuDeviceGet

2 Answers:

Answer 0 (score: 7)

After gaining more experience with TensorFlow, I realized that GPU utilization depends heavily on network size, batch size, and preprocessing. Using a bigger network with more conv layers (ResNet-style, for example) increases GPU utilization, because more computation is involved and the overhead (transferring data and so on) shrinks relative to the compute.
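A minimal sketch of how to see this effect yourself, assuming a hypothetical build_model() helper that builds the conv net and returns its train op: time steps per second at a few batch sizes with synthetic input, so that the input pipeline is out of the picture entirely.

import time
import tensorflow as tf

for batch_size in [32, 64, 128, 256]:
    tf.reset_default_graph()
    # synthetic input: no disk IO, no host-to-device feed
    images = tf.random_normal([batch_size, 66, 200, 3])
    train_op = build_model(images)  # hypothetical helper, not shown here
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        sess.run(train_op)  # warm-up step (allocations, graph setup)
        start = time.time()
        for _ in xrange(50):
            sess.run(train_op)
        elapsed = time.time() - start
        print('batch %3d: %6.1f images/sec' % (batch_size, 50 * batch_size / elapsed))

If images/sec keeps climbing with batch size while nvidia-smi shows utilization rising, the GPU was simply underfed at the smaller setting.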

Answer 1 (score: 3)

One potential bottleneck when loading the images onto the GPU is the PCI Express bus between CPU and GPU. There are tools you can use to measure it.
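A rough way to estimate it from Python (a sketch, assuming the same feed_dict path used in the question): time how fast a large batch can be pushed to a trivial GPU op, and compare the number against the pinned/pageable figures from bandwidthtest.

import time
import numpy as np
import tensorflow as tf

batch = np.zeros((256, 66, 200, 3), dtype=np.float32)  # ~40 MB per feed
x = tf.placeholder(tf.float32, shape=batch.shape)
with tf.device('/gpu:0'):
    y = tf.reduce_sum(x)  # forces the fed data onto the GPU

with tf.Session() as sess:
    sess.run(y, feed_dict={x: batch})  # warm-up
    start = time.time()
    for _ in xrange(100):
        sess.run(y, feed_dict={x: batch})
    elapsed = time.time() - start
    print('effective feed bandwidth: %.2f GB/s'
          % (100 * batch.nbytes / elapsed / 1e9))

The feed_dict path goes through pageable host memory and extra copies, so this number will typically sit well below the ~12 GB/s pinned figure; a large gap is a hint that the input path, not the bus itself, is the limit.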

Another potential bottleneck is disk IO. I don't see anything in your code that would cause it, but it is always a good idea to keep an eye on it.
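One quick check (a sketch; the file path below is a placeholder) is to time a raw sequential read of the training file and see whether the disk can keep up with the input pipeline's consumption rate. Note that the OS page cache can inflate the result unless the file is larger than RAM or caches are dropped first.

import time

path = '/path/to/train.tfrecords'  # placeholder: your TFRecord file
chunk_size = 1 << 20  # read in 1 MiB chunks
total = 0
start = time.time()
with open(path, 'rb') as f:
    while True:
        data = f.read(chunk_size)
        if not data:
            break
        total += len(data)
elapsed = time.time() - start
print('read %.1f MB at %.1f MB/s' % (total / 1e6, total / 1e6 / elapsed))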