OpenCL Cholesky Decomposition

Date: 2014-10-07 16:55:14

Tags: algorithm opencl blas

I implemented the Cholesky decomposition algorithm below using OpenCL. The code exhibits random behaviour: it matches the CPU output only some of the time. Can someone help me figure out what is wrong with my implementation?

Here is the algorithm:

procedure CHOLESKY(A)
int i, j, k;
for k := 0 to n − 1 do /* 1st loop */
    /* Obtain the square root of the diagonal element. */
    A[k, k] := sqrt(A[k, k]);

    for j := k + 1 to n − 1 do /* 2nd loop */
    /* The division step. */
        A[k, j] := A[k, j]/A[k, k];
    end for

    for i := k + 1 to n − 1 do /* 3rd loop */
        for j := i to n − 1 do /* 4th loop */
            /* The elimination step. */
            A[i, j] := A[i, j] - A[k, i] × A[k, j];

        end for
    end for
end for
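
For reference, here is a plain sequential C transcription of the pseudocode above, assuming a row-major n x n float array (the function name cholesky_cpu is just illustrative). This is the kind of CPU reference the OpenCL result gets compared against.

#include <math.h>

/* Direct sequential version of the pseudocode above (row-major n x n array). */
void cholesky_cpu(float *A, int n)
{
    for (int k = 0; k < n; k++) {
        /* Square root of the diagonal element. */
        A[k * n + k] = sqrtf(A[k * n + k]);

        /* Division step. */
        for (int j = k + 1; j < n; j++)
            A[k * n + j] /= A[k * n + k];

        /* Elimination step. */
        for (int i = k + 1; i < n; i++)
            for (int j = i; j < n; j++)
                A[i * n + j] -= A[k * n + i] * A[k * n + j];
    }
}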

How I parallelized the above algorithm:

From the algorithm, the elimination step is the most expensive. So I keep the outermost loop in the host code and invoke the kernel inside that loop; a single run of the kernel basically corresponds to a single iteration of the third loop. I therefore launch (n-1) - (k+1) + 1 work-groups, and the number of work-items per work-group is set to n/2. The second for loop is also computed in the kernel, but I only allow the first work-group to execute it.
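
For example, with MATRIX_SIZE = 10 (and assuming WA in the host code below is also the matrix width, 10), the launch for k = 0 uses (10 - 1) - (0 + 1) + 1 = 9 work-groups of 10/2 = 5 work-items each, i.e. a global size of 45.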

Relevant host code

// for a 10 X 10 matrix, MATRIX_SIZE = 10
localWorkSize[0] = MATRIX_SIZE/2;
stride = MATRIX_SIZE/2;
cl_event             event;

for(k = 0; k < MATRIX_SIZE; k++)
{

    int isize = (MATRIX_SIZE-1) - (k+1) + 1;
    int num_blocks = isize;
    if(num_blocks <= 0)
            num_blocks = 1;
    globalWorkSize[0] = num_blocks * WA/2;

    errcode = clSetKernelArg(clKernel, 0, sizeof(int), (void *)&k);
    errcode |= clSetKernelArg(clKernel, 1, sizeof(cl_mem), (void *)&d_A);
    errcode |= clSetKernelArg(clKernel, 2, sizeof(int), (void *)&stride);

    errcode = clEnqueueNDRangeKernel(clCommandQueue,
          clKernel, 1, NULL, globalWorkSize,
          localWorkSize, 0, NULL, &event);

    OpenCL_CheckError(errcode, "clEnqueueNDRangeKernel");
    clFinish(clCommandQueue);
}

Kernel code

__kernel void
batchedCholesky(__global float *U, int k, int stride)
{

    int tx = get_global_id(0);
    unsigned int j;
    unsigned int num_rows = MATRIX_SIZE;

    if(tx==0)
    {
            // Take the square root of the diagonal element
            U[k * num_rows + k] = sqrt(U[k * num_rows + k]);
    }
    barrier(CLK_GLOBAL_MEM_FENCE);

    int offset = (k+1); //From original loop
    int jstart = get_local_id(0) + offset;
    int jstep = stride;
    int jtop = num_rows - 1;

    int jbottom = (k + 1);
    //Do work for this i iteration
    //Division step
    if(get_group_id(0) == 0)
    {
            for(j = jstart; (j >= jbottom) && (j <= jtop); j+=jstep)
            {
                    U[k * num_rows + j] /= U[k * num_rows + k]; // Division step
            }
    }
    barrier(CLK_GLOBAL_MEM_FENCE);

    j = 0;

    int i = get_group_id(0) + (k+1);

    offset = i;

    jstart = get_local_id(0) + offset;

    jbottom = i;

    for(j = jstart; (j >= jbottom) && (j <= jtop); j += jstep)
            U[i * num_rows + j] -= U[k * num_rows + i] * U[k * num_rows + j]; // Elimination step

    barrier(CLK_GLOBAL_MEM_FENCE);
}

1 answer:

Answer 0 (score: 1)

Not all of your work-items execute at the same time; they may run in batches, and barrier(CLK_GLOBAL_MEM_FENCE) only synchronizes the work-items within a single work-group. So the code that runs before the barrier will not yet have produced every value that later work-groups read. That is likely the source of your errors.

If you need global synchronization, use multiple kernels.
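
A minimal sketch of that approach, assuming the same row-major layout as the question; the kernel names and the explicit n parameter are illustrative, not taken from the original code. Each k iteration is split into three launches so that each step's writes are globally visible before the next step runs:

__kernel void choleskySqrt(__global float *U, int k, int n)
{
    // 1st loop body: square root of the diagonal element (launch with a single work-item)
    if (get_global_id(0) == 0)
        U[k * n + k] = sqrt(U[k * n + k]);
}

__kernel void choleskyDiv(__global float *U, int k, int n)
{
    // 2nd loop: divide row k by the (already square-rooted) diagonal
    int j = get_global_id(0) + k + 1;
    if (j < n)
        U[k * n + j] /= U[k * n + k];
}

__kernel void choleskyElim(__global float *U, int k, int n)
{
    // 3rd/4th loops: one work-group per row i, work-items stride over the columns j
    int i = get_group_id(0) + k + 1;
    for (int j = get_local_id(0) + i; j < n; j += get_local_size(0))
        U[i * n + j] -= U[k * n + i] * U[k * n + j];
}

With an in-order command queue, the three clEnqueueNDRangeKernel calls per k execute in submission order, so no cross-work-group barrier is needed inside any of the kernels.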