I am working on an OpenGL graphics engine and I have run into a very strange problem. I import a .DAE scene (made in Cinema 4D) through Assimp, and the scene also contains a camera. The camera sits at the origin, rotated 20° to the left and 20° upwards, so part of the cube should appear in the lower-right corner of the viewport.
When rendering, I first compute a "global" lookAt matrix by applying the world transform of the camera node in the scene graph to the lookAt matrix:
cameraMatrix = transform * glm::lookAt(camera->position, camera->lookAt, camera->upward);
and then use it to compute the model-view matrix of each mesh:
// mesh.second is the world matrix
mat4 modelvMatrix = renderList->cameraMatrix * mesh.second;
This is then combined with the projection matrix and fed to the shader. However, the result (texturing is not working yet) looks "mirrored", as if the transform had been applied the other way around. Computing the camera matrix like this instead:
//cameraMatrix = transform * glm::lookAt(camera->position, camera->lookAt, camera->upward);
cameraMatrix = camera->getCameraMatrix(transform);
mat4 Camera::getCameraMatrix(mat4p transform)
{
    // Transform the camera parameters into world space first, then build the
    // lookAt matrix there. inverseTranspose is used so that direction vectors
    // would also survive a non-uniformly scaled transform.
    auto invTr = glm::inverseTranspose(mat3(transform));
    auto pos = vec3(transform * vec4(position, 1));       // world-space eye position
    auto dir = invTr * glm::normalize(lookAt - position); // world-space view direction
    auto upw = invTr * upward;                            // world-space up vector
    return glm::lookAt(pos, pos + dir, upw);
}
seems to fix the problem.
However, I am not sure the output is entirely correct, because it is not a perfect mirror image of the first picture. The local transform matrix of the camera node is:
mat4x4(
(0.939693, 0.000000, -0.342020, 0.000000),
(0.116978, 0.939693, 0.321394, 0.000000),
(0.321394, -0.342020, 0.883022, 0.000000),
(0.000000, -0.000000, 0.000000, 1.000000))
How do I correctly compute the camera matrix?
EDIT
I have been asked how the matrices are calculated:
mat4 modelvMatrix = renderList->cameraMatrix * mesh.second;
mat4 renderMatrix = projectionMatrix * modelvMatrix;
shaderProgram->setMatrix("renderMatrix", renderMatrix);
mesh.first->render();
and the shader code:
const std::string Source::VertexShader= R"(
#version 430 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec2 vertexTexCoord;
uniform mat4 renderMatrix;
out vec2 texCoord;
void main()
{
gl_Position = renderMatrix * vec4(position, 1.0);
texCoord = vertexTexCoord;
}
)";
const std::string Source::FragmentShader= R"(
#version 430 core
uniform sampler2D sampler;
in vec2 texCoord;
out vec3 color;
void main()
{
color = vec3(0.0, 1.0, 0.0);
//color = texture(sampler, texCoord);
}
)";