OpenMP parallel for slows down my code (C)

Asked: 2018-03-21 13:57:35

Tags: c openmp hpc

I am trying to use OpenMP to speed up a parallel version of list ranking. My implementation is as follows:

int ListRankingParallel(int *R1, int *S, int N)
{
    int i;
    int *Q = (int*)malloc(N * sizeof(int));

    /* Initialize the ranks and copy the successor array. */
    #pragma omp parallel for private(i)
    for (i = 0; i < N; i++) {
        if (S[i] != -1) R1[i] = 1;
        else R1[i] = 0;
        Q[i] = S[i];
    }

    /* Pointer jumping: accumulate ranks while shortcutting successor links.
       && short-circuits so Q[Q[i]] is not read once Q[i] is already -1. */
    #pragma omp parallel for private(i)
    for (i = 0; i < N; i++)
        while (Q[i] != -1 && Q[Q[i]] != -1) {
            R1[i] = R1[i] + R1[Q[i]];
            Q[i] = Q[Q[i]];
        }

    free(Q);

    return *R1;
}

My serial version of list ranking is

int ListRankingSerial(int *R2, int *S, int N)
{
    int temp;
    int j, i;

    for (i = 0; i < N; i++) {
        /* Follow the successor chain from node i, counting the hops. */
        j = 0;
        temp = S[i];
        while (S[i] != -1) {
            j++;
            S[i] = S[S[i]];
        }
        R2[i] = j;
        S[i] = temp;   /* restore the successor entry */
    }

    return *R2;
}

When I time them separately with
get_walltime(&S1);
ListRankingParallel(R1,S,N);
get_walltime(&E1);

get_walltime(&S3);
ListRankingSerial(R3,S,N);
get_walltime(&E3);

If I run my code on my Mac, the parallel version is significantly faster than the serial one. However, if I run it on a Linux cluster, the parallel version is about twice as slow as the serial version.

On my Mac, I compile my code with
gcc-7 -fopenmp <file name>.c 

On the cluster, I use

gcc -fopenmp <file name>.c 

If you want to test my code, use:

#include <stdio.h>
#include <stdlib.h>

int main(){

    int N = 1e+5;
    int *S = (int*)malloc(N * sizeof(int));
    int *R1 = (int*)malloc(N * sizeof(int));
    int *R3 = (int*)malloc(N * sizeof(int));
    double S1, S2, S3, E1, E2, E3;
    int i;

    /* Build a single chain: node i points to i+1, the last node terminates the list. */
    for (i = 0; i < N; i++)
        S[i] = i + 1;

    S[N-1] = -1;

    get_walltime(&S1);            /* timing helper, defined elsewhere */
    ListRankingParallel(R1, S, N);
    get_walltime(&E1);
    printf("%f\n", E1 - S1);

    get_walltime(&S3);
    ListRankingSerial(R3, S, N);
    get_walltime(&E3);
    printf("%f\n", E3 - S3);

    return 0;
}
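The get_walltime() helper is not shown in the question. As a rough sketch (an assumption, not the asker's actual helper), on a POSIX system it could store the current wall-clock time in seconds like this:

#include <sys/time.h>

/* Hypothetical stand-in for the timing helper used above:
   writes the current wall-clock time, in seconds, into *t. */
void get_walltime(double *t)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    *t = (double)tv.tv_sec + (double)tv.tv_usec * 1e-6;
}

When compiling with -fopenmp, omp_get_wtime() is another common way to take such wall-clock timestamps.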

Can anyone give me some advice? Thanks!

1 Answer:

Answer 0 (score: 0)

Are you certain it is running on multiple threads?

You should either be setting the OMP_NUM_THREADS environment variable or calling omp_set_num_threads() at the start of main. You can get the total number of threads available using omp_get_max_threads() and do something like

int max_threads = omp_get_max_threads();
omp_set_num_threads(max_threads);

See more information about setting the number of threads in this answer.

Edit: You can also check how many threads are actually being used with omp_get_num_threads().
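As a minimal sketch of such a check (assuming only the standard OpenMP runtime calls named above): note that omp_get_num_threads() returns 1 outside a parallel region, so it has to be called from inside one.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* Ask for as many threads as the runtime reports available. */
    int max_threads = omp_get_max_threads();
    omp_set_num_threads(max_threads);

    #pragma omp parallel
    {
        /* omp_get_num_threads() is only meaningful inside a parallel region. */
        #pragma omp single
        printf("running with %d of %d available threads\n",
               omp_get_num_threads(), max_threads);
    }
    return 0;
}

If this prints 1 on the cluster, the OpenMP runtime is not actually giving the program multiple threads, which would explain the slowdown from parallel overhead alone.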