Sampling a depth buffer in an OpenGL compute shader

Date: 2014-01-27 15:37:29

Tags: opengl glsl gpu

I am trying to sample a depth texture in a compute shader and copy it into another texture.

The problem is that I do not get correct values when I read from the depth texture:

I checked (with GDebugger) whether the initial values of the depth texture are correct, and they are. So it is the imageLoad GLSL function that retrieves wrong values.
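Independently of GDebugger, the texture contents can also be verified from the CPU with a plain readback; a minimal debug sketch, using the _depthTexture and wDimensions names from the code further down:

// debug-only: read the depth texture back to client memory
std::vector<float> pixels(wDimensions.x * wDimensions.y);
glBindTexture(GL_TEXTURE_2D, _depthTexture);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, pixels.data());
// depth values should lie in [0, 1]; a freshly cleared buffer reads back as 1.0
printf("depth[0] = %f\n", pixels[0]);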

This is my GLSL compute shader:

#version 430

layout (binding=0, r32f) readonly uniform image2D depthBuffer;
layout (binding=1, rgba8) writeonly uniform image2D colorBuffer;

// we use 16 * 16 thread groups
layout (local_size_x = 16, local_size_y = 16) in;

void    main()
{
    ivec2       position = ivec2(gl_GlobalInvocationID.xy);
    // Sampling from the depth texture
    vec4        depthSample = imageLoad(depthBuffer, position);
    // We linearize the depth value
    float       f = 1000.0;
    float       n = 0.1;
    float       z = (2 * n) / (f + n - depthSample.r * (f - n));
    // even if I call memoryBarrier(), barrier(), or memoryBarrierShared() here, I still get the same bug
    // and finally, we try to create a grayscale image of the depth values
    imageStore(colorBuffer, position, vec4(z, z, z, 1));
}
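For reference, since the question does not show it: a compute program is built like any other GLSL program, just with GL_COMPUTE_SHADER as the only stage. A minimal sketch (computeSrc is assumed to hold the shader source above; error checking omitted):

// compile and link a compute-only program
GLuint shader = glCreateShader(GL_COMPUTE_SHADER);
glShaderSource(shader, 1, &computeSrc, NULL);
glCompileShader(shader);

GLuint program = glCreateProgram();
glAttachShader(program, shader);
glLinkProgram(program);
glDeleteShader(shader);
// later: glUseProgram(program); ... glDispatchCompute(...);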

This is how I create the depth texture and the color texture:

// generate the depth texture
glGenTextures(1, &_depthTexture);
glBindTexture(GL_TEXTURE_2D, _depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, wDimensions.x, wDimensions.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

// generate the color texture
glGenTextures(1, &_colorTexture);
glBindTexture(GL_TEXTURE_2D, _colorTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, wDimensions.x, wDimensions.y, 0, GL_RGBA, GL_FLOAT, NULL);
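The depth pass described below needs the depth texture attached to a framebuffer; a typical depth-only setup could look like this (a sketch; _depthFbo is a hypothetical name, not from the question):

// attach the depth texture to an FBO so the scene's depth is rendered into it
GLuint _depthFbo;
glGenFramebuffers(1, &_depthFbo);
glBindFramebuffer(GL_FRAMEBUFFER, _depthFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, _depthTexture, 0);
// no color attachment in this sketch, so disable color output
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);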

I fill the depth texture with depth values (by binding it to a framebuffer and rendering the scene), and then I call my compute shader this way:

_computeShader.use();

// try to synchronize with the previous pass
glMemoryBarrier(GL_ALL_BARRIER_BITS);
// even if I call glFinish() here, the result is the same

glBindImageTexture(0, _depthTexture, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, _colorTexture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);

glDispatchCompute((wDimensions.x + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE,
                  (wDimensions.y + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE, 1); // round up so the 16x16 work groups cover the whole texture

// try to synchronize with the next pass
glMemoryBarrier(GL_ALL_BARRIER_BITS);

With:

  1. wDimensions = the size of the context (and of the framebuffer)
  2. WORK_GROUP_SIZE = 16

Do you have any idea why I do not get valid depth values?
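A side note on the synchronization attempts above: to my understanding, glMemoryBarrier() only makes *incoherent* writes (image stores, SSBO writes) visible, while ordinary framebuffer writes from the depth pass are synchronized by OpenGL automatically, so the barrier before the dispatch should not be needed. After the dispatch, a narrower bit matching how colorBuffer is consumed next would suffice:

glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);        // if the next pass samples it with texture()
// or
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);  // if the next pass reads it with imageLoad()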

EDIT:

This is what the color texture looks like when I render a sphere:

[image] http://i41.tinypic.com/2rqll4l.png

It seems that glClear(GL_DEPTH_BUFFER_BIT) does not do anything: even if I call it just before glDispatchCompute(), I still get the same image... How is that possible?
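One thing worth checking about that glClear() observation: glClear() only affects the currently bound draw framebuffer, so clearing the depth texture requires binding its FBO first (a sketch, with the hypothetical _depthFbo from above):

// glClear targets the bound draw framebuffer, not a texture
glBindFramebuffer(GL_FRAMEBUFFER, _depthFbo);
glClear(GL_DEPTH_BUFFER_BIT);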

1 answer:

Answer 0 (score: 8):

Actually, I found out that you cannot bind a depth texture to an image unit of a compute shader, even with the readonly keyword: depth formats are not among the formats supported for image load/store, so reading a GL_DEPTH_COMPONENT32F texture through an r32f image gives undefined results.

So I replaced:

glBindImageTexture(0, _depthTexture, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);

with:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _depthTexture);

and, in my compute shader, I replaced:

layout (binding=0, r32f) readonly uniform image2D depthBuffer;

with:

layout (binding = 0) uniform sampler2D depthBuffer;

And to sample it, I simply write:

ivec2       position = ivec2(gl_GlobalInvocationID.xy);
vec2        screenNormalized = vec2(position) / vec2(ctxSize); // ctxSize is the size of the depth and color textures
vec4        depthSample = texture(depthBuffer, screenNormalized); // texture2D() is not available in GLSL 4.30 core; texture() replaces it

And it works perfectly.
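For completeness, the whole fixed shader could look like this once the fragments are put together (a sketch: ctxSize is assumed to be a uniform set from the application, and a half-texel offset is added so the sampler reads texel centers; sampling the depth texture as a plain sampler2D works because GL_TEXTURE_COMPARE_MODE defaults to GL_NONE):

#version 430

layout (binding = 0) uniform sampler2D depthBuffer;
layout (binding = 1, rgba8) writeonly uniform image2D colorBuffer;

uniform vec2 ctxSize; // size of the depth and color textures

layout (local_size_x = 16, local_size_y = 16) in;

void main()
{
    ivec2 position         = ivec2(gl_GlobalInvocationID.xy);
    // sample the depth texture at the texel center
    vec2  screenNormalized = (vec2(position) + 0.5) / ctxSize;
    float depth            = texture(depthBuffer, screenNormalized).r;
    // linearize the depth value (same formula as in the question)
    float f = 1000.0;
    float n = 0.1;
    float z = (2.0 * n) / (f + n - depth * (f - n));
    imageStore(colorBuffer, position, vec4(z, z, z, 1.0));
}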