To determine whether the user clicked any of my 3D objects, I am trying to convert the clicked screen coordinates into a vector, which I then use to check whether one of my triangles was hit. For this I use the XMVector3Unproject method provided by DirectX, and I implement everything in C++/CX.
The problem I am facing is that the vector resulting from unprojecting the screen coordinates is not at all what I expect. The image below illustrates this:
The cursor position at the moment of the click (highlighted in yellow) is visible in the isometric view on the left. As soon as I click, the vector resulting from the unprojection appears behind the model, at the position indicated in the image where the white line penetrates the model. So instead of originating at the cursor position and going into the screen of the isometric view, it appears at a completely different position.
When I move the mouse horizontally in the isometric view while clicking, and afterwards move it vertically while clicking, I get the pattern below. All lines in the two images represent vectors resulting from the clicks. The model has been removed for better visibility.
As can be seen in the images above, all vectors appear to originate from the same location. If I change the view and repeat the process, the same pattern emerges, but with a different origin for the vectors.
Here are the code snippets I use. First, I receive the cursor position using the code below and pass it, together with the width and height of the drawing area, to my SelectObject method:
void Demo::OnPointerPressed(Object^ sender, PointerEventArgs^ e)
{
    Point currentPosition = e->CurrentPoint->Position;

    if (m_model->SelectObject(currentPosition.X, currentPosition.Y, m_renderTargetWidth, m_renderTargetHeight))
    {
        m_RefreshImage = true;
    }
}
The SelectObject method looks as follows:
bool Model::SelectObject(float screenX, float screenY, float screenWidth, float screenHeight)
{
    XMMATRIX projectionMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->projection);
    XMMATRIX viewMatrix       = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->view);
    XMMATRIX modelMatrix      = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->model);

    XMVECTOR v = XMVector3Unproject(XMVectorSet(screenX, screenY, 5.0f, 0.0f),
                                    0.0f, 0.0f, screenWidth, screenHeight,
                                    0.0f, 1.0f,
                                    projectionMatrix, viewMatrix, modelMatrix);

    XMVECTOR rayOrigin = XMVector3Unproject(XMVectorSet(screenX, screenY, 0.0f, 0.0f),
                                            0.0f, 0.0f, screenWidth, screenHeight,
                                            0.0f, 1.0f,
                                            projectionMatrix, viewMatrix, modelMatrix);

    // Code to retrieve v0, v1 and v2 is omitted
    if (Intersects(rayOrigin, XMVector3Normalize(v - rayOrigin), v0, v1, v2, depth))
    {
        return true;
    }

    return false; // no triangle was hit
}
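For the simplest configuration in this question (orthographic projection, identity view and world matrices), the unprojection that XMVector3Unproject performs can be sketched by hand. The following is a simplified stand-in, not the DirectXMath implementation; it assumes the viewport parameters used above (origin (0, 0), depth range [0, 1]) and the XMMatrixOrthographicRH conventions:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Unproject a screen-space point for an orthographic RH camera with
// identity view and world matrices. Viewport: origin (0,0), depth [0,1].
Vec3 UnprojectOrtho(float screenX, float screenY, float screenZ,
                    float vpWidth, float vpHeight,
                    float orthoWidth, float orthoHeight,
                    float nearZ, float farZ)
{
    // Screen -> normalized device coordinates (note the y-flip:
    // screen y grows downwards, NDC y grows upwards).
    float ndcX = (screenX / vpWidth) * 2.0f - 1.0f;
    float ndcY = 1.0f - (screenY / vpHeight) * 2.0f;
    float ndcZ = screenZ; // depth range is already [0, 1]

    // Invert XMMatrixOrthographicRH: x_ndc = 2x/w, y_ndc = 2y/h,
    // z_ndc = (z + nearZ) / (nearZ - farZ).
    Vec3 world;
    world.x = ndcX * orthoWidth * 0.5f;
    world.y = ndcY * orthoHeight * 0.5f;
    world.z = ndcZ * (nearZ - farZ) - nearZ;
    return world;
}
```

A click in the middle of a 1000x1000 viewport with a 1000x1000 orthographic volume unprojects to the world origin, which is a quick way to sanity-check the inputs fed to XMVector3Unproject before suspecting the matrices.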
The vector calculated at the end is used by the Intersects method of the DirectX::TriangleTests namespace to detect whether a triangle was hit. I omitted that code from the snippet above because it is not relevant to this problem.
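As a reference point for what such a ray/triangle test does, here is a minimal stand-alone equivalent, the classic Möller–Trumbore algorithm. This is a stand-in sketch, not the actual DirectXMath TriangleTests code; it only mirrors the (origin, direction, v0, v1, v2, out-depth) shape of the call above:

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Möller–Trumbore ray/triangle intersection; fills 'depth' with the
// distance along the ray, mirroring the out-parameter of Intersects.
bool Intersects(V3 origin, V3 dir, V3 v0, V3 v1, V3 v2, float& depth)
{
    const float kEps = 1e-7f;
    V3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    V3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return false;   // ray parallel to triangle
    float inv = 1.0f / det;
    V3 t = sub(origin, v0);
    float u = dot(t, p) * inv;                 // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    V3 q = cross(t, e1);
    float v = dot(dir, q) * inv;               // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    depth = dot(e2, q) * inv;                  // hit distance along dir
    return depth >= 0.0f;
}
```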
To render these images I use an orthographic projection matrix and a camera that can be rotated around its local x- and y-axis, which generates the view matrix. The world matrix always stays the same, i.e. it is simply an identity matrix.
The view matrix is calculated as follows (based on the example in Frank Luna's 3D game programming book):
void Camera::SetViewMatrix()
{
    XMFLOAT3 cameraPosition;
    XMFLOAT3 cameraXAxis;
    XMFLOAT3 cameraYAxis;
    XMFLOAT3 cameraZAxis;
    XMFLOAT4X4 viewMatrix;

    // Keep camera's axes orthogonal to each other and of unit length.
    m_cameraZAxis = XMVector3Normalize(m_cameraZAxis);
    m_cameraYAxis = XMVector3Normalize(XMVector3Cross(m_cameraZAxis, m_cameraXAxis));

    // m_cameraYAxis and m_cameraZAxis are already normalized, so there is no need
    // to normalize the below cross product of the two.
    m_cameraXAxis = XMVector3Cross(m_cameraYAxis, m_cameraZAxis);

    // Fill in the view matrix entries.
    float x = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraXAxis));
    float y = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraYAxis));
    float z = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraZAxis));

    XMStoreFloat3(&cameraPosition, m_cameraPosition);
    XMStoreFloat3(&cameraXAxis, m_cameraXAxis);
    XMStoreFloat3(&cameraYAxis, m_cameraYAxis);
    XMStoreFloat3(&cameraZAxis, m_cameraZAxis);

    viewMatrix(0, 0) = cameraXAxis.x;
    viewMatrix(1, 0) = cameraXAxis.y;
    viewMatrix(2, 0) = cameraXAxis.z;
    viewMatrix(3, 0) = x;
    viewMatrix(0, 1) = cameraYAxis.x;
    viewMatrix(1, 1) = cameraYAxis.y;
    viewMatrix(2, 1) = cameraYAxis.z;
    viewMatrix(3, 1) = y;
    viewMatrix(0, 2) = cameraZAxis.x;
    viewMatrix(1, 2) = cameraZAxis.y;
    viewMatrix(2, 2) = cameraZAxis.z;
    viewMatrix(3, 2) = z;
    viewMatrix(0, 3) = 0.0f;
    viewMatrix(1, 3) = 0.0f;
    viewMatrix(2, 3) = 0.0f;
    viewMatrix(3, 3) = 1.0f;

    m_modelViewProjectionConstantBufferData->view = viewMatrix;
}
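The matrix assembled above is, in effect, the inverse of the camera's world transform: the orthonormal axes are transposed into the columns, and the translation row holds (-P·X, -P·Y, -P·Z). A small plain-C++ check can verify that the product with the camera's world matrix is the identity; the yaw angle and position below are made-up example values, not taken from the post:

```cpp
#include <cassert>
#include <cmath>

struct M4 { float m[4][4]; };

// Row-by-column 4x4 product.
M4 Mul(const M4& a, const M4& b)
{
    M4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

bool IsIdentity(const M4& a, float eps = 1e-5f)
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            if (std::fabs(a.m[i][j] - (i == j ? 1.0f : 0.0f)) > eps)
                return false;
    return true;
}

// World transform of a camera yawed by 'a' around the world y-axis and
// placed at (px, py, pz), in the row-vector convention the post uses:
// rows 0-2 hold the camera axes, row 3 holds the position.
M4 CameraWorld(float a, float px, float py, float pz)
{
    float c = std::cos(a), s = std::sin(a);
    return M4{{{   c, 0.0f,   -s, 0.0f},
               {0.0f, 1.0f, 0.0f, 0.0f},
               {   s, 0.0f,    c, 0.0f},
               {  px,   py,   pz, 1.0f}}};
}

// View matrix built the same way as SetViewMatrix above: axes in the
// columns, bottom row = (-P.X, -P.Y, -P.Z, 1).
M4 ViewFromWorld(float a, float px, float py, float pz)
{
    float c = std::cos(a), s = std::sin(a);
    float axes[3][3] = {{c, 0.0f, -s}, {0.0f, 1.0f, 0.0f}, {s, 0.0f, c}};
    float p[3] = {px, py, pz};

    M4 v{};
    for (int j = 0; j < 3; ++j) {
        for (int i = 0; i < 3; ++i)
            v.m[i][j] = axes[j][i];
        v.m[3][j] = -(p[0]*axes[j][0] + p[1]*axes[j][1] + p[2]*axes[j][2]);
    }
    v.m[3][3] = 1.0f;
    return v;
}
```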
It is influenced by two methods which rotate the camera around its x- and y-axis:
void Camera::ChangeCameraPitch(float angle)
{
    XMMATRIX rotationMatrix = XMMatrixRotationAxis(m_cameraXAxis, angle);

    m_cameraYAxis = XMVector3TransformNormal(m_cameraYAxis, rotationMatrix);
    m_cameraZAxis = XMVector3TransformNormal(m_cameraZAxis, rotationMatrix);
}

void Camera::ChangeCameraYaw(float angle)
{
    XMMATRIX rotationMatrix = XMMatrixRotationAxis(m_cameraYAxis, angle);

    m_cameraXAxis = XMVector3TransformNormal(m_cameraXAxis, rotationMatrix);
    m_cameraZAxis = XMVector3TransformNormal(m_cameraZAxis, rotationMatrix);
}
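Rotating the axis vectors with XMMatrixRotationAxis plus XMVector3TransformNormal amounts to Rodrigues' rotation formula. The sketch below is not the DirectXMath implementation and glosses over its handedness conventions, but it shows the underlying math; note that because each call rotates already-rotated axes, floating-point error accumulates, which is why SetViewMatrix re-orthonormalizes the basis:

```cpp
#include <cassert>
#include <cmath>

struct Vec { float x, y, z; };

// Rodrigues' rotation formula: rotate v by 'angle' radians around the
// unit-length axis k:  v' = v*cos + (k x v)*sin + k*(k.v)*(1 - cos).
Vec RotateAxis(Vec v, Vec k, float angle)
{
    float c = std::cos(angle), s = std::sin(angle);
    float d = k.x*v.x + k.y*v.y + k.z*v.z;   // k . v
    Vec cr = {k.y*v.z - k.z*v.y,             // k x v
              k.z*v.x - k.x*v.z,
              k.x*v.y - k.y*v.x};
    return {v.x*c + cr.x*s + k.x*d*(1.0f - c),
            v.y*c + cr.y*s + k.y*d*(1.0f - c),
            v.z*c + cr.z*s + k.z*d*(1.0f - c)};
}
```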
The world/model matrix and the projection matrix are calculated as follows:
void Model::SetProjectionMatrix(float width, float height, float nearZ, float farZ)
{
    XMMATRIX orthographicProjectionMatrix = XMMatrixOrthographicRH(width, height, nearZ, farZ);
    XMFLOAT4X4 orientation = XMFLOAT4X4
    (
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f
    );
    XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);
    XMStoreFloat4x4(&m_modelViewProjectionConstantBufferData->projection,
                    XMMatrixTranspose(orthographicProjectionMatrix * orientationMatrix));
}

void Model::SetModelMatrix()
{
    XMFLOAT4X4 orientation = XMFLOAT4X4
    (
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f
    );
    XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);
    XMStoreFloat4x4(&m_modelViewProjectionConstantBufferData->model, XMMatrixTranspose(orientationMatrix));
}
Frankly speaking, I do not yet understand the problem I am facing. I would appreciate it if anyone with deeper insight could give me some hints as to where I need to apply changes so that the vector calculated by the unprojection starts at the cursor position and moves into the screen.
Edit 1:
I assume it is related to the fact that my camera is located at (0, 0, 0) in world coordinates. The camera rotates around its local x- and y-axis. From what I understand, the view matrix created by the camera builds the plane onto which the image is projected. If that is the case, it would explain why the ray is at a somewhat "unexpected" position.
My assumption is that I need to move the camera out of the center so that it is located outside of the object. However, if I simply modify the camera's member variable m_cameraPosition, my model gets totally distorted.
Anyone out there who is able and willing to help?
Answer 0 (score: 6)
Thanks for your hint, Kapil. I tried the XMMatrixLookAtRH method, but I was not able to change the camera's pitch/yaw with that approach, so I discarded it and kept generating the matrices myself.
What solved my problem was transposing the model, view and projection matrices with XMMatrixTranspose before passing them to XMVector3Unproject. So instead of the code being as follows
XMMATRIX projectionMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->projection);
XMMATRIX viewMatrix       = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->view);
XMMATRIX modelMatrix      = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->model);

XMVECTOR rayBegin = XMVector3Unproject(XMVectorSet(screenX, screenY, -m_boundingSphereRadius, 0.0f),
                                       0.0f, 0.0f, screenWidth, screenHeight,
                                       0.0f, 1.0f,
                                       projectionMatrix, viewMatrix, modelMatrix);
it needs to be
XMMATRIX projectionMatrix = XMMatrixTranspose(XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->projection));
XMMATRIX viewMatrix       = XMMatrixTranspose(XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->view));
XMMATRIX modelMatrix      = XMMatrixTranspose(XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->model));

XMVECTOR rayBegin = XMVector3Unproject(XMVectorSet(screenX, screenY, -m_boundingSphereRadius, 0.0f),
                                       0.0f, 0.0f, screenWidth, screenHeight,
                                       0.0f, 1.0f,
                                       projectionMatrix, viewMatrix, modelMatrix);
It is not entirely clear to me why I need to transpose the matrices before passing them to the unproject method. However, I suspect it is related to the problem I face when moving my camera. That problem is described in this post on StackOverflow.
I have not yet managed to solve that one; simply transposing the view matrix does not fix it. However, my main problem is solved, and my model is finally clickable.
If anyone has anything to add and can shed light on why the matrices need to be transposed, or why moving the camera distorts the model, please go ahead and post a comment or answer.
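One plain-C++ way to see why the transpose makes a difference: DirectXMath works in the row-vector convention (it computes v * M), while matrices stored transposed for HLSL constant buffers (as SetProjectionMatrix and SetModelMatrix above do) effectively follow the column-vector convention. Transposing converts between the two, since v * M equals the transpose of (Mᵀ * vᵀ). This is a sketch of that identity, not of the DirectXMath internals:

```cpp
#include <cassert>
#include <cmath>

struct M3 { float m[3][3]; };
struct V3 { float x, y, z; };

// v * M: row-vector convention, as DirectXMath uses it.
V3 RowMul(V3 v, const M3& a)
{
    return { v.x*a.m[0][0] + v.y*a.m[1][0] + v.z*a.m[2][0],
             v.x*a.m[0][1] + v.y*a.m[1][1] + v.z*a.m[2][1],
             v.x*a.m[0][2] + v.y*a.m[1][2] + v.z*a.m[2][2] };
}

// M * v: column-vector convention.
V3 ColMul(const M3& a, V3 v)
{
    return { a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z,
             a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z,
             a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z };
}

M3 Transpose(const M3& a)
{
    M3 t;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            t.m[i][j] = a.m[j][i];
    return t;
}
```

So a matrix written for one convention has to be transposed before a routine that expects the other convention can use it, which is exactly what the fix above does before calling XMVector3Unproject.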
Answer 1 (score: 4)
I used the XMMatrixLookAtRH API in the Model::SetViewMatrix() function to calculate the view matrix and got suitable values for the v and rayOrigin vectors.
For example:
XMStoreFloat4x4(
    &m_modelViewProjectionConstantBufferData->view,
    XMMatrixLookAtRH(m_cameraPosition,
                     XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f),
                     XMVectorSet(1.0f, 0.0f, 0.0f, 0.0f))
);
Though I have not yet been able to render the output on screen, I checked the results by computing simple values in a console application, and the vector values appear to be correct. Please check in your application and confirm.
Note: You have to provide the focal point and up-direction parameters in order to use the XMMatrixLookAtRH API in place of your current approach.
Answer 2 (score: 3)
I was able to get equal values for the v and rayOrigin vectors with both the XMMatrixLookAtRH approach and your custom view matrix, using the following code and without any matrix transpose operation:
#include <DirectXMath.h>
#include <cstdio>

using namespace DirectX;

XMVECTOR m_cameraXAxis;
XMVECTOR m_cameraYAxis;
XMVECTOR m_cameraZAxis;
XMVECTOR m_cameraPosition;

XMMATRIX gView;
XMMATRIX gView2;
XMMATRIX gProj;
XMMATRIX gModel;

void SetViewMatrix()
{
    XMVECTOR lTarget = XMVectorSet(2.0f, 2.0f, 2.0f, 1.0f);

    m_cameraPosition = XMVectorSet(1.0f, 1.0f, 1.0f, 1.0f);
    m_cameraZAxis = XMVector3Normalize(XMVectorSubtract(m_cameraPosition, lTarget));
    m_cameraXAxis = XMVector3Normalize(XMVector3Cross(XMVectorSet(1.0f, -1.0f, -1.0f, 0.0f), m_cameraZAxis));

    XMFLOAT3 cameraPosition;
    XMFLOAT3 cameraXAxis;
    XMFLOAT3 cameraYAxis;
    XMFLOAT3 cameraZAxis;
    XMFLOAT4X4 viewMatrix;

    // Keep camera's axes orthogonal to each other and of unit length.
    m_cameraZAxis = XMVector3Normalize(m_cameraZAxis);
    m_cameraYAxis = XMVector3Normalize(XMVector3Cross(m_cameraZAxis, m_cameraXAxis));

    // m_cameraYAxis and m_cameraZAxis are already normalized, so there is no need
    // to normalize the below cross product of the two.
    m_cameraXAxis = XMVector3Cross(m_cameraYAxis, m_cameraZAxis);

    // Fill in the view matrix entries.
    float x = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraXAxis));
    float y = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraYAxis));
    float z = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraZAxis));

    XMStoreFloat3(&cameraPosition, m_cameraPosition);
    XMStoreFloat3(&cameraXAxis, m_cameraXAxis);
    XMStoreFloat3(&cameraYAxis, m_cameraYAxis);
    XMStoreFloat3(&cameraZAxis, m_cameraZAxis);

    viewMatrix(0, 0) = cameraXAxis.x;
    viewMatrix(1, 0) = cameraXAxis.y;
    viewMatrix(2, 0) = cameraXAxis.z;
    viewMatrix(3, 0) = x;
    viewMatrix(0, 1) = cameraYAxis.x;
    viewMatrix(1, 1) = cameraYAxis.y;
    viewMatrix(2, 1) = cameraYAxis.z;
    viewMatrix(3, 1) = y;
    viewMatrix(0, 2) = cameraZAxis.x;
    viewMatrix(1, 2) = cameraZAxis.y;
    viewMatrix(2, 2) = cameraZAxis.z;
    viewMatrix(3, 2) = z;
    viewMatrix(0, 3) = 0.0f;
    viewMatrix(1, 3) = 0.0f;
    viewMatrix(2, 3) = 0.0f;
    viewMatrix(3, 3) = 1.0f;

    gView = XMLoadFloat4x4(&viewMatrix);
    gView2 = XMMatrixLookAtRH(m_cameraPosition,
                              XMVectorSet(2.0f, 2.0f, 2.0f, 1.0f),
                              XMVectorSet(1.0f, -1.0f, -1.0f, 0.0f));

    //m_modelViewProjectionConstantBufferData->view = viewMatrix;

    printf("yo");
}

void SetProjectionMatrix(float width, float height, float nearZ, float farZ)
{
    XMMATRIX orthographicProjectionMatrix = XMMatrixOrthographicRH(width, height, nearZ, farZ);
    XMFLOAT4X4 orientation = XMFLOAT4X4
    (
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f
    );
    XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);
    gProj = XMMatrixTranspose(XMMatrixMultiply(orthographicProjectionMatrix, orientationMatrix));
}

void SetModelMatrix()
{
    XMFLOAT4X4 orientation = XMFLOAT4X4
    (
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 1.0f, 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f
    );
    XMMATRIX orientationMatrix = XMMatrixTranspose(XMLoadFloat4x4(&orientation));
    gModel = orientationMatrix;
}

bool SelectObject(float screenX, float screenY, float screenWidth, float screenHeight)
{
    XMMATRIX projectionMatrix = gProj;
    XMMATRIX viewMatrix       = gView;
    XMMATRIX modelMatrix      = gModel;
    XMMATRIX viewMatrix2      = gView2;

    XMVECTOR v = XMVector3Unproject(XMVectorSet(screenX, screenY, 5.0f, 0.0f),
                                    0.0f, 0.0f, screenWidth, screenHeight,
                                    0.0f, 1.0f,
                                    projectionMatrix, viewMatrix, modelMatrix);

    XMVECTOR rayOrigin = XMVector3Unproject(XMVectorSet(screenX, screenY, 0.0f, 0.0f),
                                            0.0f, 0.0f, screenWidth, screenHeight,
                                            0.0f, 1.0f,
                                            projectionMatrix, viewMatrix, modelMatrix);

    // Code to retrieve v0, v1 and v2 is omitted
    auto diff = v - rayOrigin;
    auto diffNorm = XMVector3Normalize(diff);

    XMVECTOR v2 = XMVector3Unproject(XMVectorSet(screenX, screenY, 5.0f, 0.0f),
                                     0.0f, 0.0f, screenWidth, screenHeight,
                                     0.0f, 1.0f,
                                     projectionMatrix, viewMatrix2, modelMatrix);

    XMVECTOR rayOrigin2 = XMVector3Unproject(XMVectorSet(screenX, screenY, 0.0f, 0.0f),
                                             0.0f, 0.0f, screenWidth, screenHeight,
                                             0.0f, 1.0f,
                                             projectionMatrix, viewMatrix2, modelMatrix);

    auto diff2 = v2 - rayOrigin2;
    auto diffNorm2 = XMVector3Normalize(diff2);

    printf("hi");
    return true;
}

int main()
{
    SetViewMatrix();
    SetProjectionMatrix(1000, 1000, 0.0f, 1.0f);
    SetModelMatrix();

    SelectObject(500, 500, 1000, 1000);
    return 0;
}
Please check your application with this code and confirm. You will see that the code is the same as your earlier code. The only additions are the initial values of the camera parameters, the calculation of a second view matrix with XMMatrixLookAtRH in the SetViewMatrix() method, and the calculation of the vectors using both view matrices in SelectObject().
No transpose needed
I did not need to transpose any matrices. The Projection and Model matrices should not need transposing, as they are both diagonal matrices and transposing them gives the same matrix. I do not think a transpose of the View matrix is needed either. The formula of XMMatrixLookAtRH explained here provides exactly the same view matrix as yours. Moreover, the sample project given here does not transpose its matrices when checking for intersections. You can download and check the sample project.
Possible sources of the problem
1) Initialization: The only code I could not see is your initialization of m_cameraZAxis, m_cameraXAxis, nearZ, farZ, etc. Also, I did not use your camera rotation functions. As you can see, I initialized the camera using position, target and direction vectors for my calculation. Please check whether your initial calculation of m_cameraZAxis agrees with my sample code.
2) LH/RH look: Make sure there is no accidental mix-up of left-handed and right-handed conventions anywhere in your code.
3) Check whether your rotation code (ChangeCameraPitch or ChangeCameraYaw) accidentally creates camera axes that are not orthogonal. You use the camera's Y-axis as input in ChangeCameraYaw and as output in ChangeCameraPitch. But the Y-axis is being reset in SetViewMatrix by the cross product of the X- and Z-axes, so the earlier value of the Y-axis may be lost.
Good luck with your application! Do let us know once you have found the proper solution and the root cause of your problem.
Answer 3 (score: 1)
As mentioned above, the problem was not yet fully solved even though clicking now works. The issue of the model getting distorted when moving the camera, which I suspected to be related, was still present. What I mean by "the model gets distorted" is visible in the image below:
The image on the left shows how the model looks when the camera is located at the center of the world, i.e. (0, 0, 0), while the image on the right shows what happens when I move the camera in the direction of the negative y-axis. As can be seen, the model gets wider at the bottom and smaller at the top, which is the same behavior described in the link I mentioned above.
What I did to finally solve both issues was changing the code of my SetViewMatrix method (see below). The SetViewMatrix method now looks as follows:
void Camera::SetViewMatrix()
{
    XMFLOAT3 cameraPosition;
    XMFLOAT3 cameraXAxis;
    XMFLOAT3 cameraYAxis;
    XMFLOAT3 cameraZAxis;
    XMFLOAT4X4 viewMatrix;

    // Keep camera's axes orthogonal to each other and of unit length.
    m_cameraZAxis = XMVector3Normalize(m_cameraZAxis);
    m_cameraYAxis = XMVector3Normalize(XMVector3Cross(m_cameraZAxis, m_cameraXAxis));

    // m_cameraYAxis and m_cameraZAxis are already normalized, so there is no need
    // to normalize the below cross product of the two.
    m_cameraXAxis = XMVector3Cross(m_cameraYAxis, m_cameraZAxis);

    // Fill in the view matrix entries.
    float x = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraXAxis));
    float y = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraYAxis));
    float z = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraZAxis));

    //XMStoreFloat3(&cameraPosition, m_cameraPosition);
    XMStoreFloat3(&cameraXAxis, m_cameraXAxis);
    XMStoreFloat3(&cameraYAxis, m_cameraYAxis);
    XMStoreFloat3(&cameraZAxis, m_cameraZAxis);

    viewMatrix(0, 0) = cameraXAxis.x;
    viewMatrix(0, 1) = cameraXAxis.y;
    viewMatrix(0, 2) = cameraXAxis.z;
    viewMatrix(0, 3) = x;
    viewMatrix(1, 0) = cameraYAxis.x;
    viewMatrix(1, 1) = cameraYAxis.y;
    viewMatrix(1, 2) = cameraYAxis.z;
    viewMatrix(1, 3) = y;
    viewMatrix(2, 0) = cameraZAxis.x;
    viewMatrix(2, 1) = cameraZAxis.y;
    viewMatrix(2, 2) = cameraZAxis.z;
    viewMatrix(2, 3) = z;
    viewMatrix(3, 0) = 0.0f;
    viewMatrix(3, 1) = 0.0f;
    viewMatrix(3, 2) = 0.0f;
    viewMatrix(3, 3) = 1.0f;

    m_modelViewProjectionConstantBufferData->view = viewMatrix;
}
So I simply swapped the row and column coordinates. Note that I had to make sure that the ChangeCameraPitch method gets called before the ChangeCameraYaw method. This is necessary because otherwise the orientation of the model would not be the one I want.
There is also another approach that could be used. Instead of transposing the view matrix by swapping the row and column coordinates, and transposing it before passing it to XMVector3Unproject, I could use the row_major keyword in the vertex shader together with the view matrix:
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
    matrix model;
    row_major matrix view;
    matrix projection;
};
I came across this idea in this blog post. The row_major keyword affects how the shader compiler interprets the matrix in memory. The same could also be achieved by changing the order of the vector-matrix multiplication in the vertex shader, i.e. by using pos = mul(view, pos); instead of pos = mul(pos, view);
So that's basically it. The two issues are indeed connected, but with what I posted in this question I was able to solve both, so I am accepting my own reply as the answer to this question. I hope it helps someone in the future.