Direct3D11: Rendering 2D in a 3D scene: how do I keep a 2D object from moving on the viewport when the camera changes position?

Date: 2016-09-20 20:46:59

Tags: c++ 3d 2d directx direct3d11

Image illustrating the problem: http://imgur.com/gallery/vmMyk

Hi, I need some help rendering 2D objects in a 3D scene with a 3D camera. I think I managed to resolve the 2D coordinates in LH world coordinates. However, my 2D objects are only rendered in the correct position when the camera is at [0.0f, 0.0f, 0.0f]. At every other position, the placement of the 2D objects in the scene is wrong. I suspect my matrices are messed up, but I don't know where to look next. I'd appreciate any good ideas; if something is missing, please comment and I'll edit the main post to give you more information.

I am rendering larger triangles with alpha blending, using a simple 3D color HLSL shader (VS and PS version: 4.0):

cbuffer ConstantBuffer : register( b0 )
{
    matrix World;
    matrix View;
    matrix Projection;
}
struct VS_INPUT
{
    float4 Pos : POSITION;
    float4 Color : COLOR;
};
struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float4 Color : COLOR;
};

PS_INPUT VS ( VS_INPUT input )
{
    PS_INPUT output = (PS_INPUT)0;

    input.Pos.w = 1.0f;

    output.Pos = mul ( input.Pos, World );
    output.Pos = mul ( output.Pos, View );
    output.Pos = mul ( output.Pos, Projection );
    output.Color = input.Color;

    return output;
}

float4 PS ( PS_INPUT input ) : SV_Target
{
    return input.Color;
}

This is my Vertex data structure:

  struct Vertex
  {
    DirectX::XMFLOAT3 position;
    DirectX::XMFLOAT4 color;

    Vertex() {};

    Vertex(DirectX::XMFLOAT3 aPosition, DirectX::XMFLOAT4 aColor) 
      : position(aPosition)
      , color(aColor) 
    {};
  };

The call that renders the object:

bool PrimitiveMesh::Draw()
{
  unsigned int stride = sizeof(Vertex);
  unsigned int offset = 0;

  D3DSystem::GetD3DDeviceContext()->IASetVertexBuffers(0, 1, &iVertexBuffer, &stride, &offset);
  D3DSystem::GetD3DDeviceContext()->IASetIndexBuffer(iIndexBuffer, DXGI_FORMAT_R32_UINT, 0);
  D3DSystem::GetD3DDeviceContext()->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

  return true;
}

The draw call, with initialization:

    static PrimitiveMesh* mesh;
    if (mesh == 0)
    {
      std::vector<PrimitiveMesh::Vertex> vertices;
      mesh = new PrimitiveMesh();

      DirectX::XMFLOAT4 color = { 186 / 256.0f, 186 / 256.0f, 186 / 256.0f, 0.8f };
      vertices.push_back({ DirectX::XMFLOAT3(0.0f, 0.0f, 0.0f), color });
      vertices.push_back({ DirectX::XMFLOAT3(0.0f, 600.0f, 0.0f), color });
      vertices.push_back({ DirectX::XMFLOAT3(800.0f, 600.0f, 0.0f), color });

      mesh->SetVerticesAndIndices(vertices);
    }
    // Getting clean matrices here:
    D3D::Matrices(world, view, projection, ortho);
    iGI->TurnZBufferOff();
    iGI->TurnOnAlphaBlending();
    mesh->Draw();
    XMMATRIX view2D = Camera::View2D();

    iColorShader->Render(iGI->GetContext(), 3, &world, &view2D, &ortho);
    iGI->TurnZBufferOn();

These are my 2D calculations for the camera:

  up = DirectX::XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
  lookAt = DirectX::XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f);
  rotationMatrix = DirectX::XMMatrixRotationRollPitchYaw(0.0f, 0.0f, 0.0f); // (pitch, yaw, roll);

  up = DirectX::XMVector3TransformCoord(up, rotationMatrix);
  lookAt = DirectX::XMVector3TransformCoord(lookAt, rotationMatrix) + position;
  view2D = DirectX::XMMatrixLookAtLH(position, lookAt, up);

I would appreciate any help. Kind regards.

3 Answers:

Answer 0 (score: 1)

With shaders, you are not forced to use matrices; you have the flexibility to simplify the problem.

Assuming you render your 2D objects with pixel coordinates, the only requirement is to scale and offset them back into normalized projective space.

The vertex shader can be as short as this:

cbuffer ConstantBuffer : register( b0 ) {
    float2 rcpDim; // 1 / renderTargetSize
}
PS_INPUT VS ( VS_INPUT input ) {
    PS_INPUT output;

    output.Pos.xy = input.Pos.xy * rcpDim * 2; // from pixel to [0..2]
    output.Pos.xy -= 1; // to [-1..1]
    output.Pos.y *= -1; // because top left in texture space is bottom left in projective space
    output.Pos.zw = float2(0,1);
    output.Color = input.Color;
    return output;
}

You can of course build a set of matrices that achieves the same result with your original shader: just set World and View to identity, and Projection to an orthographic projection, XMMatrixOrthographicOffCenterLH(0, width, 0, height, 0, 1). But as you get into 3D programming you will soon learn to deal with multiple shaders anyway, so take it as an exercise.

Answer 1 (score: 0)

Well, I solved my problem. For some strange reason, DirectXMath was generating a wrong XMMATRIX. My XMMatrixOrthographicLH() was completely incorrect even with good parameters. I solved my problem with the classic definition of the orthographic matrix, found in this article (the definition in Figure 10):

auto orthoMatrix = DirectX::XMMatrixIdentity();
orthoMatrix.r[0].m128_f32[0] = 2.0f / Engine::VideoSettings::Current()->WindowWidth();
orthoMatrix.r[1].m128_f32[1] = 2.0f / Engine::VideoSettings::Current()->WindowHeight();
orthoMatrix.r[2].m128_f32[2] = -(2.0f / (screenDepth - screenNear));
orthoMatrix.r[2].m128_f32[3] = -(screenDepth + screenNear) / (screenDepth - screenNear);

Answer 2 (score: 0)

galop1n provided a great solution, but on my system this:

cbuffer ConstantBuffer : register( b0 )
{
    float2 rcpDim; // 1 / renderTargetSize
}

needs to be a multiple of 16 bytes in size, like this:

struct VS_CONSTANT_BUFFER
{
    DirectX::XMFLOAT2 rcpDim;
    DirectX::XMFLOAT2 rcpDim2;
};

// Supply the vertex shader constant data.
VS_CONSTANT_BUFFER VsConstData;
VsConstData.rcpDim = { 2.0f / w,2.0f / h};

// Fill in a buffer description.
D3D11_BUFFER_DESC cbDesc;
ZeroMemory(&cbDesc, sizeof(cbDesc));
cbDesc.ByteWidth = sizeof(VS_CONSTANT_BUFFER);
cbDesc.Usage = D3D11_USAGE_DYNAMIC;
cbDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
cbDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
cbDesc.MiscFlags = 0;
cbDesc.StructureByteStride = 0;

// Fill in the subresource data.
D3D11_SUBRESOURCE_DATA InitData;
ZeroMemory(&InitData, sizeof(InitData));
InitData.pSysMem = &VsConstData;
InitData.SysMemPitch = 0;
InitData.SysMemSlicePitch = 0;

// Create the buffer.
HRESULT hr = pDevice->CreateBuffer(&cbDesc, &InitData,
    &pConstantBuffer11);

or aligned:

__declspec(align(16))
struct VS_CONSTANT_BUFFER
{
    DirectX::XMFLOAT2 rcpDim;
};