Combining mmap and UVM functionality

Date: 2018-12-12 21:16:48

Tags: memory-management cuda mmap memory-mapped-files memory-mapping

Is there anything that provides both sets of functionality? I am looking for a way to allocate memory that is both "memory-mapped" (i.e., file-backed, as with mmap) and UVM (accessible from both the host and the GPU device). I see that cudaHostAlloc allocates host memory that the device can access, but there is no obvious way to declare the allocated memory range as memory-mapped!

My question is: is there an API function that allocates memory with the characteristics described above?

If the answer to that question is "no", is there a sequence of API calls that achieves the same behavior?

For example, could we first allocate UVM memory with cudaMallocManaged and then use some API (POSIX or CUDA) to declare the previously allocated range as "memory-mapped" (just like mmap)? Or the other way around: allocate with mmap, then declare the range to the CUDA driver as UVM?

Any other suggestions would also be appreciated!


Update, December 13, 2018

Unfortunately, the suggestion from @tera does not seem to work as expected: when the code executes on the device, the device does not appear to see the host memory!

Below is the code I am using, followed by the compile command.

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <assert.h>


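// Kernel: each thread writes the value 'init' into one byte of the mapped buffer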
__global__
void touchKernel(char *d, char init, int n) {
    int index =  blockIdx.x *blockDim.x + threadIdx.x;
    if(index >= n)
        return;
    d[index] = init;
}


void process_file(const char* filename, int n) {
    if(n < 0) {
        printf("Error in n: %d\n", n);
        exit(1);
    }
    size_t filesize = n*sizeof(char);
    size_t pagesize = (size_t) sysconf (_SC_PAGESIZE);

    //Open file
    int fd = open(filename, O_RDWR|O_CREAT, 0666);
    // assert(fd != -1);
    if(fd == -1) {
        perror("Open API");
        exit(1);
    }
    if(ftruncate(fd, filesize) == -1) {
        perror("ftruncate API");
        exit(1);
    }

    //Execute mmap
    char* mmappedData = (char*) mmap(0, filesize, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_LOCKED, fd, 0);
    assert(mmappedData != MAP_FAILED);
    printf("mmappedData: %p\n", mmappedData);

    for(int i=0;i<n;i++)
        mmappedData[i] = 'z';

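    // Pin the mmap()ed range with CUDA so the device can access it directly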
    if(cudaSuccess != cudaHostRegister(mmappedData, filesize, cudaHostRegisterDefault)) {
        printf("Unable to register with CUDA!\n");
        exit(1);
    }

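    // Launch one thread per byte to overwrite the mapped buffer from the device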
    int vec = 256;
    int gang = (n) / vec + 1;
    printf("gang: %d - vec: %d\n", gang, vec);
    touchKernel<<<gang, vec>>>((char*) mmappedData, 'a', n);
    cudaDeviceSynchronize();

    //Cleanup: undo the CUDA registration before unmapping
    cudaHostUnregister(mmappedData);
    int rc = munmap(mmappedData, filesize);
    assert(rc == 0);


    close(fd);
}


int main(int argc, char const *argv[])
{
    process_file("buffer.obj", 10);

    return 0;
}

To compile and run it:

nvcc -g -O0 f1.cu && cuda-memcheck ./a.out

cuda-memcheck reports that threads cannot access the memory address, with output similar to the following:

========= Invalid __global__ write of size 1
=========     at 0x000000b0 in touchKernel(char*, char, int)
=========     by thread (2,0,0) in block (0,0,0)
=========     Address 0x7fdc8e137002 is out of bounds
=========     Device Frame:touchKernel(char*, char, int) (touchKernel(char*, char, int) : 0xb0)
=========     Saved host backtrace up to driver entry point at kernel launch time
=========     Host Frame:/usr/lib/x86_64-linux-gnu/libcuda.so.1 (cuLaunchKernel + 0x2cd) [0x24d9dd]
=========     Host Frame:./a.out [0x22b22]
=========     Host Frame:./a.out [0x22d17]
=========     Host Frame:./a.out [0x570d5]
=========     Host Frame:./a.out [0x6db8]
=========     Host Frame:./a.out [0x6c76]
=========     Host Frame:./a.out [0x6cc3]
=========     Host Frame:./a.out [0x6a4c]
=========     Host Frame:./a.out [0x6ade]
=========     Host Frame:/lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main + 0xe7) [0x21b97]
=========     Host Frame:./a.out [0x673a]
=========
========= Invalid __global__ write of size 1
=========     at 0x000000b0 in touchKernel(char*, char, int)
=========     by thread (1,0,0) in block (0,0,0)
=========     Address 0x7fdc8e137001 is out of bounds
=========     Device Frame:touchKernel(char*, char, int) (touchKernel(char*, char, int) : 0xb0)
=========     Saved host backtrace up to driver entry point at kernel launch time
=========     Host Frame:/usr/lib/x86_64-linux-gnu/libcuda.so.1 (cuLaunchKernel + 0x2cd) [0x24d9dd]
=========     Host Frame:./a.out [0x22b22]
=========     Host Frame:./a.out [0x22d17]
=========     Host Frame:./a.out [0x570d5]
=========     Host Frame:./a.out [0x6db8]
=========     Host Frame:./a.out [0x6c76]
=========     Host Frame:./a.out [0x6cc3]
=========     Host Frame:./a.out [0x6a4c]
=========     Host Frame:./a.out [0x6ade]
=========     Host Frame:/lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main + 0xe7) [0x21b97]
=========     Host Frame:./a.out [0x673a]
=========
========= Invalid __global__ write of size 1
=========     at 0x000000b0 in touchKernel(char*, char, int)
=========     by thread (0,0,0) in block (0,0,0)
=========     Address 0x7fdc8e137000 is out of bounds
=========     Device Frame:touchKernel(char*, char, int) (touchKernel(char*, char, int) : 0xb0)
=========     Saved host backtrace up to driver entry point at kernel launch time
=========     Host Frame:/usr/lib/x86_64-linux-gnu/libcuda.so.1 (cuLaunchKernel + 0x2cd) [0x24d9dd]
=========     Host Frame:./a.out [0x22b22]
=========     Host Frame:./a.out [0x22d17]
=========     Host Frame:./a.out [0x570d5]
=========     Host Frame:./a.out [0x6db8]
=========     Host Frame:./a.out [0x6c76]
=========     Host Frame:./a.out [0x6cc3]
=========     Host Frame:./a.out [0x6a4c]
=========     Host Frame:./a.out [0x6ade]
=========     Host Frame:/lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main + 0xe7) [0x21b97]
=========     Host Frame:./a.out [0x673a]
=========
========= Program hit cudaErrorLaunchFailure (error 4) due to "unspecified launch failure" on CUDA API call to cudaDeviceSynchronize. 
=========     Saved host backtrace up to driver entry point at error
=========     Host Frame:/usr/lib/x86_64-linux-gnu/libcuda.so.1 [0x351c13]
=========     Host Frame:./a.out [0x40a16]
=========     Host Frame:./a.out [0x6a51]
=========     Host Frame:./a.out [0x6ade]
=========     Host Frame:/lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main + 0xe7) [0x21b97]
=========     Host Frame:./a.out [0x673a]
=========

The output above indicates that the code did not execute successfully on the device.

Any suggestions?


Update, December 14, 2018

I changed the kernel to the following:

__global__
void touchKernel(char *d, char init, int n) {
    int index =  blockIdx.x *blockDim.x + threadIdx.x;
    if(index >= n || index < 0)
        return;
    printf("index %d\n", index);
    d[index] = init + (index%20);
    printf("index %d - Done\n", index);
}

With the old kernel replaced by the code above, the output of both printf statements is visible, and inspecting the buffer.obj file shows that it contains the correct content!


Update, December 14, 2018

cuda-memcheck may be part of the problem here. It turns out that if the executable is run without cuda-memcheck, the contents of buffer.obj are completely correct. However, if the executable is run under cuda-memcheck, the contents of the output file (buffer.obj) are completely incorrect!

1 Answer:

Answer 0 (score: 3):

Coincidentally, I had just replied to a similar question on Nvidia's forums.

You can register mmap()ed memory with cudaHostRegister() if you pass the MAP_LOCKED flag to mmap().

When doing so, you will probably need to raise the limit on locked memory (ulimit -l in bash).

Update: it turns out the MAP_LOCKED flag to mmap() is not even necessary. However, the documentation for cudaHostRegister() lists some further restrictions (a minimal sketch putting the pieces together follows the list below):

  • On systems without unified virtual addressing, the cudaHostRegisterMapped flag needs to be passed to cudaHostRegister(), otherwise the memory will not be mapped. Unless the device has a non-zero value for the cudaDevAttrCanUseHostPointerForRegisteredMem attribute, this also means you need to query the device-side address of the mapped memory range via cudaHostGetDevicePointer().
  • The CUDA context must have been created with the cudaMapHost flag for the mapping to happen. Since contexts are created lazily by the runtime API, you need to create the context yourself via the driver API before any runtime API call in order to influence the flags it is created with.
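
Putting those pieces together, here is a minimal sketch of what the mmap()-then-register route can look like with the runtime API. This is an illustration under assumptions, not code from the question: the file name data.bin is made up, error handling is reduced to asserts, and cudaSetDeviceFlags(cudaDeviceMapHost) stands in for creating the context through the driver API (it affects the runtime's primary context as long as it is called before that context is created).

// sketch.cu -- mmap() a file, register the range with CUDA, write to it from a kernel.
#include <cstdio>
#include <cassert>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <cuda_runtime.h>

__global__ void fill(char *p, size_t n, char v) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        p[i] = v;
}

int main() {
    const size_t n = 4096;

    // Ask the runtime to create its primary context with host-memory mapping enabled.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    int fd = open("data.bin", O_RDWR | O_CREAT, 0666);
    assert(fd != -1);
    assert(ftruncate(fd, n) == 0);

    // File-backed, shared mapping; MAP_LOCKED is not required per the update above.
    char *host = (char *) mmap(NULL, n, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    assert(host != MAP_FAILED);

    // Pin the mapped range and make it accessible to the device.
    assert(cudaHostRegister(host, n, cudaHostRegisterMapped) == cudaSuccess);

    // On UVA systems this returns the same address; on others it may differ.
    char *dev = NULL;
    assert(cudaHostGetDevicePointer((void **) &dev, host, 0) == cudaSuccess);

    // Write to the file-backed memory from the GPU.
    fill<<<(n + 255) / 256, 256>>>(dev, n, 'a');
    assert(cudaDeviceSynchronize() == cudaSuccess);

    printf("host[0] after kernel: %c\n", host[0]);   // should print 'a'

    // Undo the registration before unmapping, then flush the changes to the file.
    cudaHostUnregister(host);
    msync(host, n, MS_SYNC);
    munmap(host, n);
    close(fd);
    return 0;
}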