I have the following code:
for (int i = 0; i < veryLargeArraySize; i++) {
    int value = A[i];
    if (B[value] < MAX_VALUE) {
        B[value]++;
    }
}
I want to use an OpenMP work-sharing construct here, but my problem is the synchronization on the B array: all parallel threads can access any element of B, which is very large, so using locks is impractical (I would need far too many of them), and #pragma omp critical is a serious overhead. Because of the if, a simple #pragma omp atomic increment does not work either.
Does anyone have a good suggestion on how I could do this?
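For illustration, a minimal sketch of the critical-section version being avoided here (same A, B, veryLargeArraySize and MAX_VALUE as above); every update of B is serialized, which is where the overhead comes from:

#pragma omp parallel for
for (int i = 0; i < veryLargeArraySize; i++) {
    int value = A[i];
    #pragma omp critical
    {
        // All threads funnel through this one critical section,
        // so the histogram updates run essentially sequentially.
        if (B[value] < MAX_VALUE) {
            B[value]++;
        }
    }
}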
Answer 0 (score: 0)
Here is what I found out and ended up doing.
I read on some forums that parallel histogram computation is often a bad idea, because it can easily be slower and less efficient than the sequential computation.
However, I needed to do it (it is an assignment), so what I did was the following:
Process the A array (an image) in parallel to determine the actual range of values for the histogram (the B array), i.e. find the MIN and MAX of A[i]:
int min_value = INT_MAX;  /* from <limits.h>; the reduction also combines */
int max_value = INT_MIN;  /* with these initial values, so set them first  */

#pragma omp parallel for reduction(min:min_value) reduction(max:max_value)
for (int i = 0; i < veryLargeArraySize; i++) {
    const int value = A[i];
    if (max_value < value) max_value = value;
    if (min_value > value) min_value = value;
}

int size_of_histo = max_value - min_value + 1;
Allocate a shared array, for example:
// omp_get_num_threads() returns 1 outside of a parallel region,
// so use omp_get_max_threads() to size one slice per thread.
int num_threads = omp_get_max_threads();
int* sharedHisto = (int*) calloc(num_threads * size_of_histo, sizeof(int));
Each thread is assigned its own slice of sharedHisto and can update it without any synchronization:
#pragma omp parallel default(shared)
{
    // The thread id has to be queried inside the parallel region.
    int my_id = omp_get_thread_num();

    #pragma omp for
    for (int i = 0; i < veryLargeArraySize; i++) {
        int value = A[i];
        // my_id * size_of_histo points to the beginning of this thread's
        // slice of sharedHisto; value - min_value selects the bin.
        sharedHisto[my_id * size_of_histo + value - min_value]++;
    }
}
Now, perform the reduction (as described here: Reducing on array in OpenMp):
#pragma omp parallel default(shared)
{
    // Every thread is in charge of a part of the reduced histogram,
    // which has size_of_histo entries in total.
    int my_id = omp_get_thread_num();
    int num_threads = omp_get_num_threads();
    int chunk = (size_of_histo + num_threads - 1) / num_threads;
    int start = my_id * chunk;
    int end = (start + chunk > size_of_histo) ? size_of_histo : start + chunk;

    // The iteration space is already split by hand via start/end,
    // so no extra work-sharing directive is needed here.
    for (int i = start; i < end; i++) {
        int value = B[i + min_value];
        for (int j = 0; j < num_threads; j++) {
            value += sharedHisto[j * size_of_histo + i];
        }
        B[i + min_value] = (value > MAX_VALUE) ? MAX_VALUE : value;
    }
}
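Since sharedHisto was allocated with calloc, remember to release it with free(sharedHisto) once the merge is done.

For what it's worth, if the compiler supports OpenMP 4.5 array-section reductions, the manual sharedHisto bookkeeping can be left to the runtime. A minimal sketch of that alternative, using the same A, B, min_value, size_of_histo and MAX_VALUE as above (and assuming the per-bin sums do not overflow an int before the final clamp):

// The runtime keeps one private copy of histo per thread and sums the
// copies back into histo when the loop ends.
int* histo = (int*) calloc(size_of_histo, sizeof(int));

#pragma omp parallel for reduction(+: histo[0:size_of_histo])
for (int i = 0; i < veryLargeArraySize; i++) {
    histo[A[i] - min_value]++;
}

// Merge into B with the same saturation at MAX_VALUE as the original loop.
for (int i = 0; i < size_of_histo; i++) {
    int value = B[i + min_value] + histo[i];
    B[i + min_value] = (value > MAX_VALUE) ? MAX_VALUE : value;
}

free(histo);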