Reconstructing a view-space position from the depth buffer in DirectX 11 / HLSL

Asked: 2015-08-16 18:36:30

Tags: c++ matrix directx directx-11 hlsl

I want to reconstruct the view-space position from the depth buffer texture. I have already bound the depth buffer as a shader resource view to the shader, and I believe that part works correctly.

I compute the view-space position of each pixel on the screen with this formula:

Texture2D textures[4]; //color, normal (in view space), depth buffer (as texture), random
SamplerState ObjSamplerState;

cbuffer cbPerObject : register(b0) {
    float4 notImportant;
    float4 notImportant2;
    float2 notImportant3;
    float4x4 projectionInverted;
};

float3 getPosition(float2 textureCoordinates) {
    //textures[2] stores the depth buffer
    float depth = textures[2].Sample(ObjSamplerState, textureCoordinates).r;
    float3 screenPos = float3(textureCoordinates.xy * float2(2, -2) - float2(1, -1), 1 - depth);
    float4 wpos = mul(float4(screenPos, 1.0f), projectionInverted);
    wpos.xyz /= wpos.w;
    return wpos.xyz;
}

However, it gives me an incorrect result:

[screenshot of the incorrect result]

This is how I compute the inverted projection matrix on the CPU and pass it to the pixel shader:

ConstantBuffer2DStructure cbPerObj;
DirectX::XMFLOAT4X4 projection = camera->getProjection();
DirectX::XMMATRIX camProjection = XMLoadFloat4x4(&projection);
camProjection = XMMatrixTranspose(camProjection);
DirectX::XMVECTOR det; DirectX::XMMATRIX projectionInverted = XMMatrixInverse(&det, camProjection);
cbPerObj.projectionInverted = projectionInverted;
...
context->UpdateSubresource(constantBuffer, 0, NULL, &cbPerObj, 0, 0);
context->PSSetConstantBuffers(0, 1, &constantBuffer);

I know the following vertex shader computation works correctly (so I assume myCamera->getProjection() returns a good matrix):

DirectX::XMFLOAT4X4 view = myCamera->getView();
DirectX::XMMATRIX camView = XMLoadFloat4x4(&view);
DirectX::XMFLOAT4X4 projection = myCamera->getProjection();
DirectX::XMMATRIX camProjection = XMLoadFloat4x4(&projection);
DirectX::XMMATRIX worldViewProjectionMatrix = objectWorldMatrix * camView * camProjection;

constantsPerObject.worldViewProjection = XMMatrixTranspose(worldViewProjectionMatrix);
constantsPerObject.world = XMMatrixTranspose(objectWorldMatrix);
constantsPerObject.view = XMMatrixTranspose(camView);

But maybe I am computing the inverted projection matrix in the wrong way? Or am I making some other mistake?

EDIT

As @NicoSchertler pointed out, the 1 - depth part in the shader was wrong. I have changed it to depth - 1 and made some small changes to the texture format etc. I now get this result:

[screenshot after the first fix]

Note that this is from a different camera angle (I no longer have the earlier one). For reference, here are the normals in view space:

[screenshot: view-space normals, for reference]

It looks somewhat better, but is it correct? It still looks strange and not very smooth. Is this a precision issue?

EDIT 2

As @NicoSchertler suggested, the depth buffer in DirectX uses the [0...1] range, so I changed depth - 1 to just depth:

float depth = textures[2].Sample(ObjSamplerState, textureCoordinates).r;
float3 screenPos = float3(textureCoordinates.xy * float2(2, -2) - float2(1, -1), depth);// -1); //<-- the change
float4 wpos = mul(float4(screenPos, 1.0f), projectionInverted);
wpos.xyz /= wpos.w;
return wpos.xyz;

But this is the result I got:

[screenshot after the second change]

0 Answers