OpenMP: Splitting a loop based on NUMA

Date: 2014-07-25 14:12:55

Tags: multithreading performance openmp affinity numa

I am running the following loop using 8 OpenMP threads:

float* data;
int n;

#pragma omp parallel for schedule(dynamic, 1) default(none) shared(data, n)
for ( int i = 0; i < n; ++i )
{
    // ... do something with data[i] ...
}

Because of NUMA, I would like to run the first half of the loop (i = 0, ..., n/2-1) with threads 0, 1, 2, 3 and the second half (i = n/2, ..., n-1) with threads 4, 5, 6, 7.

Essentially, I want to run two loops in parallel, each loop using a separate set of OpenMP threads.

How can I achieve this with OpenMP?

Thanks

PS: Ideally, if the threads of one group finish their half of the loop while the other half is still unfinished, I would like the threads of the finished group to join the other group and help process the remaining half.

I am thinking of something like the following, but I wonder whether I can do this with OpenMP without the extra bookkeeping:

float* data;
int n;
int i0 = 0;
int i1 = n / 2;

#pragma omp parallel for schedule(dynamic, 1) default(none) shared(data,n,i0,i1)
for ( int i = 0; i < n; ++i )
{
    int nt = omp_get_thread_num();
    int j;
    #pragma omp critical
    {
        if ( nt < 4 ) {
            if ( i0 < n / 2 ) j = i0++; // First 4 threads process first half
            else              j = i1++; // of loop unless first half is finished
        }
        else {
            if ( i1 < n ) j = i1++;  // Second 4 threads process second half
            else          j = i0++;  // of loop unless second half is finished 
        }
    }

    // ... do something with data[j] ...
}
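
A related thought: the critical section above serializes every index fetch. If the compiler supports OpenMP 3.1, the same two-counter scheme could use atomic captures instead. Below is a rough sketch only; split_loop() and process() are hypothetical placeholder names for the surrounding function and the real per-element work:

#include <omp.h>

void process(float x);   // hypothetical stand-in for the per-element work

void split_loop(float* data, int n)
{
    int i0 = 0;      // next unclaimed index in the first half
    int i1 = n / 2;  // next unclaimed index in the second half

    #pragma omp parallel for schedule(dynamic, 1) default(none) shared(data, n, i0, i1)
    for ( int i = 0; i < n; ++i )
    {
        int first = omp_get_thread_num() < 4;  // first 4 threads own the first half
        int j;

        // Claim an index from our own half...
        if ( first ) {
            #pragma omp atomic capture
            j = i0++;
        } else {
            #pragma omp atomic capture
            j = i1++;
        }
        // ...and steal from the other half once ours is exhausted
        // (an out-of-range value just means "our half is done").
        if ( first && j >= n / 2 ) {
            #pragma omp atomic capture
            j = i1++;
        } else if ( !first && j >= n ) {
            #pragma omp atomic capture
            j = i0++;
        }

        // The loop runs exactly n times and each iteration claims exactly
        // one of the n indices, so j is always in range here.
        process(data[j]);
    }
}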

1 Answer:

Answer 0 (score: 5)

The best bet is likely nested parallelization: first across the NUMA nodes, and then within each node. That way you can still use the dynamic-scheduling infrastructure while still splitting the data between thread groups:

#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {

    const int ngroups=2;     /* e.g., one group per NUMA node */
    const int npergroup=4;   /* threads within each group */
    const int ndata = 16;

    omp_set_nested(1);       /* enable nested parallel regions */
    #pragma omp parallel for num_threads(ngroups)
    for (int i=0; i<ngroups; i++) {
        /* contiguous chunk [start, end) of the data assigned to group i */
        int start = (ndata*i+(ngroups-1))/ngroups;
        int end   = (ndata*(i+1)+(ngroups-1))/ngroups;

        #pragma omp parallel for num_threads(npergroup) shared(i, start, end) schedule(dynamic,1)
        for (int j=start; j<end; j++) {
            printf("Thread %d from group %d working on data %d\n", omp_get_thread_num(), i, j);
        }
    }

    return 0;
}

Running this gives:

$ gcc -fopenmp -o nested nested.c -Wall -O -std=c99
$ ./nested | sort -n -k 9
Thread 0 from group 0 working on data 0
Thread 3 from group 0 working on data 1
Thread 1 from group 0 working on data 2
Thread 2 from group 0 working on data 3
Thread 1 from group 0 working on data 4
Thread 3 from group 0 working on data 5
Thread 3 from group 0 working on data 6
Thread 0 from group 0 working on data 7
Thread 0 from group 1 working on data 8
Thread 3 from group 1 working on data 9
Thread 2 from group 1 working on data 10
Thread 1 from group 1 working on data 11
Thread 0 from group 1 working on data 12
Thread 0 from group 1 working on data 13
Thread 2 from group 1 working on data 14
Thread 0 from group 1 working on data 15

But note that the nested approach may change the thread assignment relative to single-level threading, so you may have to play around with KMP_AFFINITY or other mechanisms a bit more to get the bindings right again.
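
For example, with an OpenMP 4.0 runtime the binding can be expressed from the environment. This is a sketch only, assuming you want each group packed onto its own NUMA node; the right place list depends on the machine:

$ export OMP_NESTED=true             # allow the nested parallel regions
$ export OMP_PLACES=cores            # one binding place per physical core
$ export OMP_PROC_BIND=spread,close  # outer level: spread the groups apart;
                                     # inner level: keep each group's threads together
$ ./nested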