GCC v.4 compilation error with #pragma omp task (variables of reference type are not allowed in private/firstprivate clauses)

Posted: 2016-12-13 14:36:09

Tags: gcc compiler-errors task openmp gcc4

I am porting a large MPI-based physics code to OpenMP tasks. On a Cray supercomputer the code compiles, links, and runs perfectly (the cray-mpich library and the Cray compiler are used there). The code was then moved to a server used for Jenkins continuous integration (I have no admin rights on that server), where only the GCC v.4 compiler is available (the Cray compiler cannot be used since it's not a Cray machine). On that server my code fails to compile with the error:

... error: ‘pcls’ implicitly determined as ‘firstprivate’ has reference type
    #pragma omp task
            ^

This is spaghetti code, so it is hard to copy-paste the lines that cause the error, but my guess is that it is due to the problem described here: http://forum.openmp.org/forum/viewtopic.php?f=5&t=117
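For what it's worth, I believe the failing pattern boils down to a few lines like the following (a sketch with made-up names, not the actual code; GCC 4.x with -fopenmp emits the same diagnostic for it, since under its OpenMP rules a variable that is implicitly firstprivate in a task must not have reference type):

```cpp
#include <vector>

// A reference-typed local variable captured by an OpenMP task.
// GCC 4.x rejects this: 'ref' is implicitly determined as
// 'firstprivate' but has reference type.
int first_element(std::vector<int>& data)
{
    std::vector<int>& ref = data;   // reference type, like 'pcls' below
    int result = 0;
    #pragma omp parallel
    #pragma omp single
    #pragma omp task shared(result) // 'ref' is implicitly firstprivate -> error on GCC 4.x
    {
        result = ref[0];
    }
    // the implicit barrier at the end of 'single' waits for the task
    return result;
}
```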

Is there any way to work around this? It seems GCC v.6 has fixed it, but I'm not sure... I'm curious whether anyone else has run into this...
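In case it helps the discussion, one workaround I'm considering (an untested sketch with made-up names, not my actual code) is to rebind the reference through a pointer: the restriction only applies to reference types in private/firstprivate, and a pointer is a plain scalar that firstprivate accepts. If I read the spec correctly, keeping the reference but listing it explicitly as `shared(pcls)` on the task should also compile, since the restriction does not cover the shared clause.

```cpp
// Hypothetical skeleton of the species loop with the reference
// replaced by a pointer, so the task captures a scalar.
struct Particles { int nop; int getNOP() const { return nop; } };

void count_particles(const Particles* part, int ns, int* nop_out)
{
    for (int s = 0; s < ns; s++)
    {
        const Particles* pcls = &part[s];   // pointer instead of reference
        #pragma omp task firstprivate(pcls) // a pointer is fine in firstprivate
        {
            nop_out[s] = pcls->getNOP();
        }
    }
    #pragma omp taskwait
}
```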

UPD: I am providing the skeleton of a function in which one such error occurs (sorry for the long listing!):

void EMfields3D::sumMoments_vectorized(const Particles3Dcomm* part)
{
  grid_initialisation(...);
  #pragma omp parallel
  {
    for (int species_idx = 0; species_idx < ns; species_idx++)
    {
      const Particles3Dcomm& pcls = part[species_idx];
      assert_eq(pcls.get_particleType(), ParticleType::SoA);
      const int is = pcls.get_species_num();
      assert_eq(species_idx,is);

      double const*const x = pcls.getXall();
      double const*const y = pcls.getYall();
      double const*const z = pcls.getZall();
      double const*const u = pcls.getUall();
      double const*const v = pcls.getVall();
      double const*const w = pcls.getWall();
      double const*const q = pcls.getQall();

      const int nop = pcls.getNOP();
      #pragma omp master
      {
        start_timing_for_moments_accumulation(...);
      }
      ...
      #pragma omp for // because shared
        for(int i=0; i<moments1dsize; i++)
          moments1d[i]=0;

      // prevent threads from writing to the same location
      for(int cxmod2=0; cxmod2<2; cxmod2++)
        for(int cymod2=0; cymod2<2; cymod2++)
        // each mesh cell is handled by its own thread
          #pragma omp for collapse(2)
            for(int cx=cxmod2;cx<nxc;cx+=2)
              for(int cy=cymod2;cy<nyc;cy+=2)
                for(int cz=0;cz<nzc;cz++)
                  #pragma omp task
                  {
                    const int ix = cx + 1;
                    const int iy = cy + 1;
                    const int iz = cz + 1;
                    {
                      // reference the 8 nodes to which we will
                      // write moment data for particles in this mesh cell.
                      //
                      arr1_double_fetch momentsArray[8];
                      arr2_double_fetch moments00 = moments[ix][iy];
                      arr2_double_fetch moments01 = moments[ix][cy];
                      arr2_double_fetch moments10 = moments[cx][iy];
                      arr2_double_fetch moments11 = moments[cx][cy];
                      momentsArray[0] = moments00[iz]; // moments000 
                      momentsArray[1] = moments00[cz]; // moments001 
                      momentsArray[2] = moments01[iz]; // moments010 
                      momentsArray[3] = moments01[cz]; // moments011 
                      momentsArray[4] = moments10[iz]; // moments100 
                      momentsArray[5] = moments10[cz]; // moments101 
                      momentsArray[6] = moments11[iz]; // moments110 
                      momentsArray[7] = moments11[cz]; // moments111 

                      const int numpcls_in_cell = pcls.get_numpcls_in_bucket(cx,cy,cz);
                      const int bucket_offset = pcls.get_bucket_offset(cx,cy,cz);
                      const int bucket_end = bucket_offset+numpcls_in_cell;

                      some_manipulation_with_moments_accumulation(...);

                    }
                }

      #pragma omp master
      {
        end_timing_for_moments_accumulation(...);
      }

      // reduction
      #pragma omp master
      {
        start_timing_for_moments_reduction(...);
      }

      {
        #pragma omp for collapse(2)
          for(int i=0;i<nxn;i++)
          {
            for(int j=0;j<nyn;j++)
            {
              for(int k=0;k<nzn;k++)
                #pragma omp task
                {
                  rhons[is][i][j][k] = invVOL*moments[i][j][k][0];
                  Jxs  [is][i][j][k] = invVOL*moments[i][j][k][1];
                  Jys  [is][i][j][k] = invVOL*moments[i][j][k][2];
                  Jzs  [is][i][j][k] = invVOL*moments[i][j][k][3];
                  pXXsn[is][i][j][k] = invVOL*moments[i][j][k][4];
                  pXYsn[is][i][j][k] = invVOL*moments[i][j][k][5];
                  pXZsn[is][i][j][k] = invVOL*moments[i][j][k][6];
                  pYYsn[is][i][j][k] = invVOL*moments[i][j][k][7];
                  pYZsn[is][i][j][k] = invVOL*moments[i][j][k][8];
                  pZZsn[is][i][j][k] = invVOL*moments[i][j][k][9];
                }
            }
          }
      }

      #pragma omp master
      {
        end_timing_for_moments_reduction(...);
      }

    }

  }

  for (int i = 0; i < ns; i++)
  {
    communicateGhostP2G(i);
  }

}

Please don't try to find the logic here (such as why there is a "#pragma omp parallel" followed by a for loop without "#pragma omp for", or why there is a task construct inside a for loop)... I did not write this code, but I have to port it to OpenMP tasks...

0 Answers:

There are no answers yet.