Camera-space normals from a depth texture

Date: 2015-03-01 16:29:27

Tags: opengl glsl lighting normals deferred-rendering

I want to use the stored (non-linear) depth texture from the 1st pass to produce screen-space normals. In the 2nd pass I can render out depth, diffuse, ID and so on, but I can't seem to get normals from depth working.

My current understanding of getting normals from depth:

  1. texture() / texelFetch() the depth at the current tex coord = p, at +(1,0) = p1, and at +(0,1) = p2
  2. reconstruct camera-space positions from these
  3. get vector v1 = p1 - p: the vector between the current position and its screen/texture-space x neighbour
  4. get vector v2 = p2 - p: the vector between the current position and its screen/texture-space y neighbour
  5. cross() these two and normalize() to get the surface normal for the current texel
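In rough GLSL pseudocode, this is what I believe each texel of the fullscreen pass should do (just a sketch of the idea; reconstructPosition(), depthTex, uv and texelSize are placeholders, not my actual code, which follows below):

    vec2 dx = vec2(texelSize.x, 0.0);
    vec2 dy = vec2(0.0, texelSize.y);

    // sample depth at the texel and at its x / y neighbours
    float d  = texture(depthTex, uv).r;
    float d1 = texture(depthTex, uv + dx).r;
    float d2 = texture(depthTex, uv + dy).r;

    // reconstruct camera-space positions, then cross the two edge vectors
    vec3 p  = reconstructPosition(uv,      d );
    vec3 p1 = reconstructPosition(uv + dx, d1);
    vec3 p2 = reconstructPosition(uv + dy, d2);
    vec3 normal = normalize(cross(p1 - p, p2 - p));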
Shader (2nd pass! rendering to a fullscreen quad):

Vertex:


    #version 330 core
    
    layout (location = 0) in vec2 position;
    layout (location = 2) in vec2 texcoord;
    
    out vec2 texcoordFrag;
    
    void main()
    {
        gl_Position = vec4(position.x, position.y, 0, 1);
        texcoordFrag = texcoord;
    }
    

Fragment:

    #version 330 core

    uniform sampler2D tex;
    uniform mat4 vpInv;

    in vec2 texcoordFrag;
    layout(location = 0) out vec4 fragmentColor;

    float near = 0.1;
    float far = 100.0;

    float linearizeDepth(float depth)
    {
        float nearToFarDistance = far - near;
        return (2.0 * near) / (far + near - depth * nearToFarDistance);
        //http://www.ozone3d.net/blogs/lab/20090206/how-to-linearize-the-depth-value/
        //http://www.geeks3d.com/20091216/geexlab-how-to-visualize-the-depth-buffer-in-glsl/
    }

    vec3 worldSpacePositionFromDepth(in float depth)
    {
        vec4 clipSpacePos;
        clipSpacePos.xy = texcoordFrag * 2.0 - 1.0;
        clipSpacePos.z = texture(tex, texcoordFrag).r * 2.0 - 1.0;
        clipSpacePos.w = 1.0;
        vec4 homogenousPos = vpInv * clipSpacePos;
        return homogenousPos.xyz / homogenousPos.w;
    }

    vec3 viewSpacePositionFromDepth(in float depth) //PositionFromDepth_DarkPhoton(): https://www.opengl.org/discussion_boards/showthread.php/176040-Render-depth-to-texture-issue
    {
        vec2 ndc; // Reconstructed NDC-space position
        vec3 eye; // Reconstructed EYE-space position

        float top = 0.05463024898; //per 1 radian FoV & distance of 0.1
        float bottom = -top;

        float right = top * 1024.0 / 768.0;
        float left = -right;

        float width = 1024.0;
        float height = 768.0;
        float widthInv = 1.0 / width;
        float heightInv = 1.0 / height;

        ndc.x = (texcoordFrag.x - 0.5) * 2.0;
        ndc.y = (texcoordFrag.y - 0.5) * 2.0;

        eye.z = linearizeDepth(depth);
        //eye.z = near * far / ((depth * (far - near)) - far); //original
        eye.x = (-ndc.x * eye.z) * right/near;
        eye.y = (-ndc.y * eye.z) * top/near;

        return eye;
    }

    vec3 getNormal(vec3 p1, vec3 p2)
    {
        vec3 normal = cross(p2, p1);
        normal.z = -normal.z;
        return normalize(normal) * 0.5 + 0.5;
    }

    void main()
    {
        float depth  = texture(tex, texcoordFrag.st).r;
        float depth1 = texture(tex, texcoordFrag.st + vec2(1.0/1024.0, 0)).r;
        float depth2 = texture(tex, texcoordFrag.st + vec2(1.0/768.0 , 0)).r;

        float depthLinear = linearizeDepth(depth);
        if (depthLinear > 1.0) discard;

        //fragmentColor = vec4(depth, depth, depth, 1.0f);
        fragmentColor = vec4(depthLinear, depthLinear, depthLinear, 1.0);

        //vec3 w = worldSpacePositionFromDepth(depth);
        //fragmentColor = vec4(w, 1.0);

        vec3 p  = viewSpacePositionFromDepth(depth);
        vec3 p1 = viewSpacePositionFromDepth(depth1);
        vec3 p2 = viewSpacePositionFromDepth(depth2);
        vec3 v1 = (p1 - p);
        vec3 v2 = (p2 - p);

        vec3 normal = getNormal(v1, v2);
        fragmentColor = vec4(normal, 1.0);
    }

Result:

(screenshot)

Question: What am I doing wrong? Explain it like you would to a five-year-old. :)

This will serve as the basis for SSAO and directional lighting.

As far as I can tell, dFdx is not the answer here, since I'm dealing with texels, not fragments.
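(By the derivative trick I mean something along these lines, which as far as I understand only applies while rasterizing the geometry itself, assuming a hypothetical camera-space varying viewPos coming from the vertex shader:)

    // geometry-pass fragment shader only -- screen-space derivatives of the
    // interpolated camera-space position give the face normal directly
    vec3 n = normalize(cross(dFdx(viewPos), dFdy(viewPos)));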

1 Answer:

Answer 0 (score: 4)

The way you compute p1 and p2 via viewSpacePositionFromDepth is certainly wrong. That function uses the same texcoordFrag for all three points, just with different depths, so all three points will lie on a single line. The cross product in getNormal should therefore yield 0 for any pixel; what you are seeing here is just numerical instability, amplified by normalizing the result vector to unit length.
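One way to fix that is sketched below (untested; it reuses your constants and linearizeDepth(), and only addresses the texcoord issue). The point is that the texture coordinate has to vary per sample, so pass it in as a parameter:

    vec3 viewSpacePositionFromDepth(in vec2 texcoord, in float depth)
    {
        float top   = 0.05463024898;
        float right = top * 1024.0 / 768.0;

        vec2 ndc = texcoord * 2.0 - 1.0;
        vec3 eye;
        eye.z = linearizeDepth(depth); // whether this is true eye-space z is a separate question
        eye.x = (-ndc.x * eye.z) * right / near;
        eye.y = (-ndc.y * eye.z) * top   / near;
        return eye;
    }

    // in main() -- note the y neighbour should also be offset in y, not x:
    vec2 offX = vec2(1.0 / 1024.0, 0.0);
    vec2 offY = vec2(0.0, 1.0 / 768.0);
    vec3 p  = viewSpacePositionFromDepth(texcoordFrag,        texture(tex, texcoordFrag).r);
    vec3 p1 = viewSpacePositionFromDepth(texcoordFrag + offX, texture(tex, texcoordFrag + offX).r);
    vec3 p2 = viewSpacePositionFromDepth(texcoordFrag + offY, texture(tex, texcoordFrag + offY).r);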