Using nvprof to monitor GPU performance does not work

Time: 2019-07-10 13:21:01

Tags: nvprof

I am trying to use nvprof to monitor the performance of my GPU. I would like to know the time spent on HtoD (host-to-device) transfers, DtoH (device-to-host) transfers, and device execution. It works well with the standard example code from the numba CUDA website:

from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    tx = cuda.threadIdx.x # this is the unique thread ID within a 1D block
    ty = cuda.blockIdx.x  # Similarly, this is the unique block ID within the 1D grid

    block_size = cuda.blockDim.x  # number of threads per block
    grid_size = cuda.gridDim.x    # number of blocks in the grid

    start = tx + ty * block_size
    stride = block_size * grid_size

    # assuming x and y inputs are same length
    for i in range(start, x.shape[0], stride):
        out[i] = x[i] + y[i]

if __name__ == "__main__":
    import numpy as np

    n = 100000
    x = np.arange(n).astype(np.float32)
    y = 2 * x
    out = np.empty_like(x)

    threads_per_block = 128
    blocks_per_grid = 30

    add_kernel[blocks_per_grid, threads_per_block](x, y, out)
    print(out[:10])
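
For reference, this is how I invoke nvprof from the shell (add_test.py is just a placeholder name for the script above):

# add_test.py is a placeholder for whichever file the script above is saved in
nvprof python add_test.py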

Here is the result from nvprof:

[Screenshot: nvprof worked]

However, when I add multiprocessing with the following code:

import multiprocessing as mp
from numba import cuda

def fun():

    @cuda.jit
    def add_kernel(x, y, out):
        tx = cuda.threadIdx.x # this is the unique thread ID within a 1D block
        ty = cuda.blockIdx.x  # Similarly, this is the unique block ID within the 1D grid

        block_size = cuda.blockDim.x  # number of threads per block
        grid_size = cuda.gridDim.x    # number of blocks in the grid

        start = tx + ty * block_size
        stride = block_size * grid_size

        # assuming x and y inputs are same length
        for i in range(start, x.shape[0], stride):
            out[i] = x[i] + y[i]

    import numpy as np

    n = 100000
    x = np.arange(n).astype(np.float32)
    y = 2 * x
    out = np.empty_like(x)

    threads_per_block = 128
    blocks_per_grid = 30

    add_kernel[blocks_per_grid, threads_per_block](x, y, out)
    print(out[:10])
    return out


# check gpu condition
if __name__ == "__main__":
    p = mp.Process(target=fun)
    p.daemon = True
    p.start()
    p.join()

nvprof seems to be monitoring the process, but it does not produce any results, even though it reports that it is profiling:

[Screenshot: nvprof does not record]
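
For reference, the nvprof documentation also describes a --profile-child-processes option for profiling processes that the application forks; with the placeholder filename from above, the invocation would look like this:

# --profile-child-processes makes nvprof follow forked/spawned child processes
nvprof --profile-child-processes python add_test.py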

Moreover, when I use Ray (a package for distributed computing):

if __name__ == "__main__":

    import multiprocessing

    def fun():

        from numba import cuda
        import ray

        @ray.remote(num_gpus=1)
        def call_ray():
            @cuda.jit
            def add_kernel(x, y, out):
                tx = cuda.threadIdx.x # this is the unique thread ID within a 1D block
                ty = cuda.blockIdx.x  # Similarly, this is the unique block ID within the 1D grid

                block_size = cuda.blockDim.x  # number of threads per block
                grid_size = cuda.gridDim.x    # number of blocks in the grid

                start = tx + ty * block_size
                stride = block_size * grid_size

                # assuming x and y inputs are same length
                for i in range(start, x.shape[0], stride):
                    out[i] = x[i] + y[i]

            import numpy as np

            n = 100000
            x = np.arange(n).astype(np.float32)
            y = 2 * x
            out = np.empty_like(x)

            threads_per_block = 128
            blocks_per_grid = 30

            add_kernel[blocks_per_grid, threads_per_block](x, y, out)
            print(out[:10])
            return out


        ray.shutdown()
        ray.init(redis_address = "***")
        out = ray.get(call_ray.remote())

    # check gpu condition
    p = multiprocessing.Process(target = fun)
    p.daemon = True
    p.start()
    p.join()

nvprof shows nothing at all! It does not even print the line saying that nvprof is profiling the process (although the code does execute):

[Screenshot: nvprof does not work]
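
The nvprof documentation also describes a --profile-all-processes mode for profiling CUDA processes that nvprof did not launch itself (such as workers started by a cluster daemon); a sketch of that invocation, with output.%p as a placeholder output file pattern:

# profiles every CUDA process started afterwards on this machine;
# %p in the output name is replaced by each process ID
nvprof --profile-all-processes -o output.%p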

Does anyone know how to deal with this? Or do I have any other options for getting this data for distributed computing?

0 Answers

No answers yet.