I'm implementing a targeted spot light. I have the light cone, the falloff and all of that working fine. The problem is that when I rotate the camera around a point in space, the lighting appears to follow it, i.e. no matter where the camera is, the light is always at the same angle relative to the camera.
This is what I'm doing in the vertex shader:
void main()
{
    // Compute vertex normal in eye space.
    attrib_Fragment_Normal = (Model_ViewModelSpaceInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;

    // Compute position in eye space.
    vec4 position = Model_ViewModelSpace * vec4(attrib_Position, 1.0);

    // Compute vector between light and vertex.
    attrib_Fragment_Light = Light_Position - position.xyz;

    // Compute spot-light cone direction vector.
    attrib_Fragment_Light_Direction = normalize(Light_LookAt - Light_Position);

    // Compute vector from eye to vertex.
    attrib_Fragment_Eye = -position.xyz;

    // Output texture coord.
    attrib_Fragment_Texture = attrib_Texture;

    // Return position.
    gl_Position = Camera_Projection * position;
}
I have a targeted spotlight defined by Light_Position and Light_LookAt (the look-at being, of course, the point in space the spotlight is aimed at). Both the position and the look-at are already in eye space; I compute the eye-space values CPU-side by subtracting the camera position from them.
In the vertex shader I then go on to compute the cone vector, from the light position to the light's look-at point, which tells the pixel shader where the main axis of the light cone is.
At this point I'm wondering whether I also have to transform that vector, and if so, by what? I've tried the inverse transpose of the view matrix, with no luck.
Can anyone walk me through this?
Here's the pixel shader for completeness:
void main(void)
{
    // Compute N dot L.
    vec3 N = normalize(attrib_Fragment_Normal);
    vec3 L = normalize(attrib_Fragment_Light);
    vec3 E = normalize(attrib_Fragment_Eye);
    vec3 H = normalize(L + E);

    float NdotL = clamp(dot(L, N), 0.0, 1.0);
    float NdotH = clamp(dot(N, H), 0.0, 1.0);

    // Compute ambient term.
    vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour;

    // Diffuse.
    vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;

    // Specular.
    float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength;
    vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity;

    // Light attenuation (so we don't have to use 1 - x, we step between Max and Min).
    float d = length(-attrib_Fragment_Light);
    float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d);

    // Adjust attenuation based on light cone.
    vec3 S = normalize(attrib_Fragment_Light_Direction);
    float LdotS = dot(-L, S);
    float CosI = Light_Cone_Min - Light_Cone_Max;
    attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0);

    // Final colour.
    Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation;
}
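As an aside, the cone adjustment at the end is the usual inner/outer-cosine interpolation. Here is the same math factored into a helper, just as a sketch, assuming Light_Cone_Min and Light_Cone_Max hold the cosines of the inner and outer cone angles respectively:

// Returns 1.0 inside the inner cone, 0.0 outside the outer cone and a
// linear ramp in between. L is the normalized fragment-to-light vector,
// S the normalized cone axis (from the light position towards its look-at).
float SpotFactor(vec3 L, vec3 S, float cosInner, float cosOuter)
{
    float LdotS = dot(-L, S);
    return clamp((LdotS - cosOuter) / (cosInner - cosOuter), 0.0, 1.0);
}

With that, the existing line is equivalent to attenuation *= SpotFactor(L, S, Light_Cone_Min, Light_Cone_Max).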
Thanks for the replies below. I still can't work this out. I'm now transforming the light into eye space CPU-side, so no transform of the light should be needed in the shader, but it still doesn't work.
// Compute eye-space light position.
Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);
// Compute eye-space light direction vector.
Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition);
MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection);
MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);
... and in the vertex shader I'm now doing this (below). As far as I can see, the light is in eye space, the vertex is transformed into eye space, and the light vector (attrib_Fragment_Light) is in eye space. Yet the vector never changes. Forgive me for being a bit thick!
// Transform normal from model space, through world space and into eye space (world * view * normal = eye).
attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
// Transform vertex into eye space (world * view * vertex = eye)
vec4 position = Model_WorldView * vec4(attrib_Position, 1.0);
// Compute vector from eye space vertex to light (which has already been put into eye space).
attrib_Fragment_Light = Light_Position - position.xyz;
// Compute vector from the vertex to the eye (which is now at the origin).
attrib_Fragment_Eye = -position.xyz;
// Output texture coord.
attrib_Fragment_Texture = attrib_Texture;
Answer 0 (score: 1)
You can't compute the eye-space position by subtracting the camera position; you have to multiply by the model-view matrix.
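In other words, subtracting the camera position only undoes the camera's translation; the view matrix also contains the camera's rotation, and the light has to go through both. A minimal sketch of the idea in GLSL, assuming the light is uploaded in world space and that a Camera_View uniform holding the view matrix is available (Camera_View and the *_World names are illustrative, not from the question):

uniform mat4 Camera_View;            // assumed: the camera's view matrix
uniform vec3 Light_Position_World;   // assumed: spotlight position, world space
uniform vec3 Light_LookAt_World;     // assumed: spotlight target, world space

void main()
{
    // A position goes through the full view transform (rotation + translation).
    vec3 lightPosEye = (Camera_View * vec4(Light_Position_World, 1.0)).xyz;

    // A direction has no translation component, so w = 0.0 drops it.
    vec3 lightDirEye = normalize((Camera_View *
        vec4(Light_LookAt_World - Light_Position_World, 0.0)).xyz);

    // ... use lightPosEye / lightDirEye where Light_Position and
    //     attrib_Fragment_Light_Direction are used in the shaders above ...
}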
Answer 1 (score: 1)
It looks like here you are subtracting Light_Position, which I assume you want to be a world-space coordinate (since you seem frustrated that it's currently in eye space), from position, which is an eye-space vector.
// Compute vector between light and vertex.
attrib_Fragment_Light = Light_Position - position.xyz;
If you subtract two vectors, they must both be in the same coordinate space. If you want to do your lighting calculations in world space, then you should use a world-space position vector, not a view-space position vector.
That means multiplying the attrib_Position variable by your Model matrix, not your ModelView matrix, and using this vector as the basis for your lighting calculations.
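For example, a minimal sketch of that world-space variant (Model, View and Light_Position_World are illustrative names, not ones from the question):

uniform mat4 Model;                  // assumed: model (world) matrix only, no view
uniform mat4 View;                   // assumed: view matrix
uniform mat4 Camera_Projection;
uniform vec3 Light_Position_World;   // assumed: light position kept in world space

attribute vec3 attrib_Position;
varying vec3 attrib_Fragment_Light;

void main()
{
    // Vertex in world space: Model only, not ModelView.
    vec4 worldPosition = Model * vec4(attrib_Position, 1.0);

    // Both operands are now in world space, so the subtraction is consistent.
    attrib_Fragment_Light = Light_Position_World - worldPosition.xyz;

    // The clip-space output still needs the full chain.
    gl_Position = Camera_Projection * View * worldPosition;
}

If you go this route, the eye vector and any other lighting inputs have to be expressed in world space as well (e.g. camera position minus worldPosition.xyz for the eye vector).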