While implementing SSLR I ran into a problem with objects rendering incorrectly: they get projected infinitely "downward" and don't show up in the mirror at all. The code and screenshots are below.

Fragment SSLR shader:
#version 330 core

uniform sampler2D normalMap;             // in view space
uniform sampler2D depthMap;              // in view space
uniform sampler2D colorMap;
uniform sampler2D reflectionStrengthMap;
uniform mat4 projection;
uniform mat4 inv_projection;

in vec2 texCoord;

layout (location = 0) out vec4 fragColor;

vec3 calcViewPosition(in vec2 texCoord) {
    // Combine UV & depth into XY & Z (NDC)
    vec3 rawPosition = vec3(texCoord, texture(depthMap, texCoord).r);

    // Convert from (0, 1) range to (-1, 1)
    vec4 ScreenSpacePosition = vec4(rawPosition * 2 - 1, 1);

    // Undo perspective transformation to bring into view space
    vec4 ViewPosition = inv_projection * ScreenSpacePosition;

    // Perform perspective divide and return
    return ViewPosition.xyz / ViewPosition.w;
}

vec2 rayCast(vec3 dir, inout vec3 hitCoord, out float dDepth) {
    dir *= 0.25f;

    for (int i = 0; i < 20; i++) {
        // March the ray one step in view space
        hitCoord += dir;

        // Project the sample point back to screen space
        vec4 projectedCoord = projection * vec4(hitCoord, 1.0);
        projectedCoord.xy /= projectedCoord.w;
        projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;

        // Compare the ray depth against the scene depth at that pixel
        float depth = calcViewPosition(projectedCoord.xy).z;
        dDepth = hitCoord.z - depth;

        if (dDepth < 0.0) return projectedCoord.xy;
    }

    return vec2(-1.0);
}

void main() {
    vec3 normal = texture(normalMap, texCoord).xyz * 2.0 - 1.0;
    vec3 viewPos = calcViewPosition(texCoord);

    // Reflection vector
    vec3 reflected = normalize(reflect(normalize(viewPos), normalize(normal)));

    // Ray cast
    vec3 hitPos = viewPos;
    float dDepth;
    float minRayStep = 0.1f;
    vec2 coords = rayCast(reflected * max(minRayStep, -viewPos.z), hitPos, dDepth);

    if (coords != vec2(-1.0))
        fragColor = mix(texture(colorMap, texCoord),
                        texture(colorMap, coords),
                        texture(reflectionStrengthMap, texCoord).r);
    else
        fragColor = texture(colorMap, texCoord);
}
Also, the lights are not reflected at all.

I would appreciate any help.
UPDATE: I fixed the incorrect reflections, but a problem remains.
I solved it as follows: ViewPosition.y *= -1
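For reference, this is where that line sits inside calcViewPosition from the shader above (a sketch of the workaround just described, not necessarily a root-cause fix):

    // Undo perspective transformation to bring into view space
    vec4 ViewPosition = inv_projection * ScreenSpacePosition;
    ViewPosition.y *= -1.0;  // workaround: flip Y after unprojecting

    // Perform perspective divide and return
    return ViewPosition.xyz / ViewPosition.w;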
Now, as you can see in the screenshot, for some reason the lower part of the object is not reflected.

The question is still open.
Answer (score: 2)
I am also struggling to get good SSR working. I found two things that may help you.
To get view-space normals you have to keep only the camera rotation and drop the translation. If you don't, the normals get stretched in the direction opposite to the camera's movement and no longer point the right way, even after normalizing them again. For a column-major mat4 you can do it like this:
    mat4 viewNoTranslation = view;
    viewNoTranslation[3] = vec4(0.0, 0.0, 0.0, 1.0);
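As a usage sketch (inNormal and the surrounding G-buffer shader are assumptions, not part of the original answer), the normal pass would then transform normals with this translation-free matrix:

    // Rotate the normal into view space; the zeroed fourth column means
    // the camera translation no longer leaks into the result.
    vec3 viewNormal = normalize((viewNoTranslation * vec4(inNormal, 1.0)).xyz);

Note that passing the normal with w = 0.0, or using mat3(view) * inNormal, also ignores the translation column and achieves the same effect.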
The depth you sample from the depth image is logarithmic; if you linearize it you do get values from 0 to 1, but they are not accurate enough for the precision you need here. I tried passing the depth value straight from the vertex shader instead:
    gl_Position = ubo.projection * ubo.view * ubo.model * inPos;
    depth = gl_Position.z;
I don't know if that is the right way to do it, but the depth is more accurate now.
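For completeness, a minimal fragment-shader counterpart could write that value to a dedicated render target (a sketch under assumptions: the varying is named depth and a floating-point color attachment is bound; none of this is from the original answer):

    #version 330 core

    in float depth;                       // clip-space z from the vertex shader

    layout (location = 0) out float depthOut;

    void main() {
        // Store the pre-divide clip-space depth directly, sidestepping the
        // non-linear encoding of the hardware depth buffer.
        depthOut = depth;
    }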
Please update if you make progress :)