gpuR - solving a matrix

Date: 2019-03-01 17:44:58

Tags: r

For users of the gpuR package for R: what makes the code that inverts gpux run more slowly than the code that inverts x?

set.seed(0)
x <- matrix(rnorm(10000,0,1),100,100)

system.time(
for(i in 1:1e4){
  solve(x)
})


library(gpuR)

set.seed(0)
x <- matrix(rnorm(10000,0,1),100,100)
gpux <- vclMatrix(x, 100, 100)


system.time(
for(i in 1:1e4){
  solve(gpux)
})
  • On the CPU: 10.746 seconds
  • On the GPU: 65.432 seconds

If there is nothing wrong with the code, I suspect the slowness is related to how the matrix gpux is defined, or more precisely to the size of the matrix. I also wonder whether, on every iteration of the loop, a copy of the matrix is transferred from the local environment to the GPU.
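One way to test the copy hypothesis is to time the host-to-device transfer separately from the solve itself. This is only a sketch, assuming gpuR is installed and an OpenCL-capable device is available:

```r
library(gpuR)

set.seed(0)
x <- matrix(rnorm(10000, 0, 1), 100, 100)

# Time the host-to-device transfer alone (1e3 iterations to keep it short):
# if this dominates, the copy, not the inversion, is the bottleneck.
system.time(
  for (i in 1:1e3) {
    gpux <- vclMatrix(x, 100, 100)  # copies x into GPU memory each time
  }
)

# For comparison: solve() on a matrix that is already resident on the GPU,
# so no fresh host-to-device copy of x happens inside the loop.
gpux <- vclMatrix(x, 100, 100)
system.time(
  for (i in 1:1e3) {
    solve(gpux)
  }
)
```

If the first timing is a large fraction of the second, most of the time is spent moving data rather than computing.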

Below is the information for my CPU and GPU, respectively:

CPU

[pedro@pedro-avell ~]$ lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       39 bits physical, 48 bits virtual
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  2
Core(s) per socket:  4
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               60
Model name:          Intel(R) Core(TM) i7-4710MQ CPU @ 2.50GHz
Stepping:            3
CPU MHz:             1086.144
CPU max MHz:         3500.0000
CPU min MHz:         800.0000
BogoMIPS:            4990.29
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            6144K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts flush_l1d

GPU

[pedro@pedro-avell deviceQuery]$ ./deviceQuery 
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 970M"
  CUDA Driver Version / Runtime Version          10.0 / 10.0
  CUDA Capability Major/Minor version number:    5.2
  Total amount of global memory:                 6084 MBytes (6379536384 bytes)
  (10) Multiprocessors, (128) CUDA Cores/MP:     1280 CUDA Cores
  GPU Max Clock rate:                            1038 MHz (1.04 GHz)
  Memory Clock rate:                             2505 Mhz
  Memory Bus Width:                              192-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.0, CUDA Runtime Version = 10.0, NumDevs = 1
Result = PASS

Best regards.

1 answer:

Answer 0: (score: 0)

My guess is that there is overhead just from moving the data from CPU memory to GPU memory, so the benefit of parallelization only shows once the matrix is large enough.

A quick benchmark confirms this:

library(gpuR)
library(microbenchmark)

n    <- 500
x    <- matrix(rnorm(n^2,0,1),n,n)
gpux <- vclMatrix(x)

microbenchmark(
  solve(x),
  solve(gpux)
)

On my machine this gives, for n=100:

Unit: milliseconds
        expr      min       lq      mean   median       uq         max neval
    solve(x) 1.300824 1.352155  1.709623 1.395733 1.887031    5.620657   100
 solve(gpux) 3.099540 3.510344 17.595400 3.772625 4.319700 1348.374935   100

...but for n=500:

Unit: milliseconds
        expr      min        lq     mean   median       uq      max neval
    solve(x) 118.6044 121.10937 158.2209 141.3021 187.8337 343.9758   100
 solve(gpux)  39.7822  42.48962 110.5771 156.9022 172.0060 207.4688   100
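To locate the crossover point more precisely, the benchmark above can be extended into a sweep over matrix sizes. A sketch, under the same assumptions (gpuR installed and an OpenCL device available); the chosen sizes are illustrative:

```r
library(gpuR)
library(microbenchmark)

# Benchmark solve() on the CPU and on the GPU for a range of matrix sizes.
# The size at which the GPU median drops below the CPU median is roughly
# where the transfer overhead gets amortized by the parallel computation.
for (n in c(100, 250, 500, 1000)) {
  x    <- matrix(rnorm(n^2, 0, 1), n, n)
  gpux <- vclMatrix(x)
  bm   <- microbenchmark(cpu = solve(x), gpu = solve(gpux), times = 20)
  cat("n =", n, "\n")
  print(summary(bm)[, c("expr", "median")])
}
```

On the hardware above, the results for n=100 and n=500 suggest the crossover lies somewhere between those two sizes.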