HLSL custom bitpacking not working correctly

Asked: 2018-11-16 21:45:28

Tags: directx directx-11 hlsl pixel-shader

I'm a bit of a noob with DirectX, but I have been trying to get this custom bit packing working all day. I'm trying to pack a float4 and another float into a uint. The float4 is a color and the float is a depth value. I want to use 6 bits for each color channel and the remaining 8 bits for depth. Precision is not important. I thought I understood what I was doing, but when unpacking, it just keeps returning all zeros. Is this even possible? The pixel format is R16G16B16A16_FLOAT.

Here is the code:

uint PackColorAndDepthToUint(float4 Color, float Depth)
{

    uint4 u = (int4) (Color * float4(255, 255, 255, 1.0f));
    uint packedOutput = (u.a << 26) | (u.b << 20) | (u.g << 14) | (u.r << 8) | asuint(Depth * 255);
    return packedOutput;
}

void UnpackColorAndDepthFromUint(uint packedInput, out float4 Color, out float Depth)
{
    uint d = (packedInput & 255);
    Depth = ((float) d / 255);
    uint r = (((packedInput >> 8)) & 64);
    uint g = (((packedInput >> 14)) & 64);
    uint b = (((packedInput >> 20)) & 64);
    uint a = (((packedInput >> 26)) & 64);
    uint4 co = uint4(r, g, b, a);

    Color = (((float4) co) / float4(255, 255, 255, 1.0f));
}

If anyone can point me in the right direction, I would appreciate it!
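
For reference, here is a minimal, untested sketch of a pack/unpack pair matching the layout described in the question (6 bits per color channel in bits 8-31, 8 bits of depth in bits 0-7). The function names simply mirror the ones above for comparison, and the sketch assumes the packed value is written to a UINT-typed render target such as DXGI_FORMAT_R32_UINT, since a 16-bit float channel cannot losslessly hold an arbitrary 32-bit bit pattern.

uint PackColorAndDepthToUint(float4 Color, float Depth)
{
    // Quantize each channel to 6 bits (0..63) and the depth to 8 bits (0..255).
    uint4 c = (uint4) round(saturate(Color) * 63.0f);
    uint d = (uint) round(saturate(Depth) * 255.0f);
    // Layout (MSB -> LSB): a:6 | b:6 | g:6 | r:6 | depth:8
    return (c.a << 26) | (c.b << 20) | (c.g << 14) | (c.r << 8) | d;
}

void UnpackColorAndDepthFromUint(uint packedInput, out float4 Color, out float Depth)
{
    // Mask with 0x3F (63) to keep all six bits; masking with 64 keeps only bit 6.
    uint r = (packedInput >> 8) & 0x3F;
    uint g = (packedInput >> 14) & 0x3F;
    uint b = (packedInput >> 20) & 0x3F;
    uint a = (packedInput >> 26) & 0x3F;
    Color = float4(r, g, b, a) / 63.0f;
    Depth = (float) (packedInput & 0xFF) / 255.0f;
}

The main differences from the code in the question are scaling by 63 rather than 255 for the 6-bit channels, masking with 0x3F rather than 64 when unpacking, and converting the depth with a numeric cast instead of asuint, which reinterprets the float's bit pattern rather than converting its value.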

0 Answers
