I'm writing a small raytracer and I want to orbit the camera around an object, so I implemented the code below.
`vecTemp` is the camera position vector and `vecT` is the initial position (I run this snippet inside a for loop). `yaw` is the angle, in radians, by which the camera moves.
double yaw = degree * (PI / 180.0); // degrees to radians (180.0, not the float literal 180.f)
vecTemp.x() = cos(yaw) * vecT.x() - sin(yaw) * vecT.z();
vecTemp.z() = cos(yaw) * vecT.z() + sin(yaw) * vecT.x();
vecTemp.y() = 0;
sett.camera.buildCameraToWorld(vecTemp, { 0, 0, 0 });
sett.camera.rotateY(2 * degree);
void buildCameraToWorld(const arma::dvec3& from, const arma::dvec3& to, const arma::dvec3& tmp = arma::dvec3{ 0, 1, 0 })
{
    arma::dvec3 forward = arma::normalise(from - to);
    // normalise the result: tmp is generally not perpendicular to forward,
    // so the raw cross product is not unit length
    arma::dvec3 right = arma::normalise(arma::cross(arma::normalise(tmp), forward));
    arma::dvec3 up = arma::cross(forward, right);
    // rows hold the basis vectors and position; the final transpose
    // moves them into columns
    this->cameraToWorld.zeros();
    this->cameraToWorld(0, 0) = right.x();
    this->cameraToWorld(0, 1) = right.y();
    this->cameraToWorld(0, 2) = right.z();
    this->cameraToWorld(1, 0) = up.x();
    this->cameraToWorld(1, 1) = up.y();
    this->cameraToWorld(1, 2) = up.z();
    this->cameraToWorld(2, 0) = forward.x();
    this->cameraToWorld(2, 1) = forward.y();
    this->cameraToWorld(2, 2) = forward.z();
    this->cameraToWorld(3, 0) = from.x();
    this->cameraToWorld(3, 1) = from.y();
    this->cameraToWorld(3, 2) = from.z();
    this->cameraToWorld(3, 3) = 1; // was missing: without the homogeneous 1,
                                   // transformed points lose their w component
    this->cameraToWorld = this->cameraToWorld.t();
}
void rotateY(double yaw)
{
    yaw = yaw * (PI / 180); // degrees to radians
    this->cameraToWorld = this->cameraToWorld * arma::dmat44{ {  cos(yaw), 0, sin(yaw), 0 },
                                                              {  0,        1, 0,        0 },
                                                              { -sin(yaw), 0, cos(yaw), 0 },
                                                              {  0,        0, 0,        1 } };
}
So here is what I don't understand: why do I still have to rotate around the Y axis after translating? After buildCameraToWorld I should already have a new camera matrix that looks at the origin, right? And why does it only work when I rotate by twice the degree value?
I also found another approach: instead of moving the camera I could rotate the whole world, the way an OpenGL-style camera does. But is that a good idea for a ray tracer?