Nvidia 3D Video using DirectX 11 and SlimDX in C#

Date: 2012-06-30 19:36:06

Tags: c# nvidia direct3d directx-11 slimdx

Good day, I am trying to display live stereoscopic video from two IP cameras using Nvidia 3D Vision. I am completely new to DirectX, but I have worked through some tutorials and other questions on this site and elsewhere. For now I am displaying two static bitmaps for the left and right eyes; once this part of my program works, they will be replaced with bitmaps from my cameras. The question NV_STEREO_IMAGE_SIGNATURE and DirectX 10/11 (nVidia 3D Vision) helped me a great deal, but I am still struggling to get my program working properly. My shutter glasses start working correctly, but only the right-eye image is displayed, while the left eye stays blank (apart from the mouse cursor).

Here is the code I use to generate the stereo image:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

using SlimDX;
using SlimDX.Direct3D11;
using SlimDX.Windows;
using SlimDX.DXGI;

using Device = SlimDX.Direct3D11.Device;            // Make sure we use DX11
using Resource = SlimDX.Direct3D11.Resource;

namespace SlimDxTest2
{
static class Program
{
    private static Device device;               // DirectX11 Device
    private static int Count;                   // Just to make sure things are being updated

    // The NVSTEREO header. 
    static byte[] stereo_data = new byte[] {0x4e, 0x56, 0x33, 0x44,   //NVSTEREO_IMAGE_SIGNATURE         = 0x4433564e; 
    0x00, 0x0F, 0x00, 0x00,                                           //Screen width * 2 = 1920*2 = 3840 = 0x00000F00; 
    0x38, 0x04, 0x00, 0x00,                                           //Screen height = 1080             = 0x00000438; 
    0x20, 0x00, 0x00, 0x00,                                           //dwBPP = 32                       = 0x00000020; 
    0x02, 0x00, 0x00, 0x00};                                          //dwFlags = SIH_SCALE_TO_FIT       = 0x00000002

    [STAThread]
    static void Main()
    {

        Bitmap left_im = new Bitmap("Blue.png");        // Read in Bitmaps
        Bitmap right_im = new Bitmap("Red.png");

        // Device creation 
        var form = new RenderForm("Stereo test") { ClientSize = new Size(1920, 1080) };
        var desc = new SwapChainDescription()
        {
            BufferCount = 1,
            ModeDescription = new ModeDescription(1920, 1080, new Rational(120, 1), Format.R8G8B8A8_UNorm),
            IsWindowed = false, //true,
            OutputHandle = form.Handle,
            SampleDescription = new SampleDescription(1, 0),
            SwapEffect = SwapEffect.Discard,
            Usage = Usage.RenderTargetOutput
        };

        SwapChain swapChain;
        Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.Debug, desc, out device, out swapChain);

        RenderTargetView renderTarget;          // create a view of our render target, which is the backbuffer of the swap chain we just created
        using (var resource = Resource.FromSwapChain<Texture2D>(swapChain, 0))
            renderTarget = new RenderTargetView(device, resource);

        var context = device.ImmediateContext;                  // set up a viewport
        var viewport = new Viewport(0.0f, 0.0f, form.ClientSize.Width, form.ClientSize.Height);
        context.OutputMerger.SetTargets(renderTarget);
        context.Rasterizer.SetViewports(viewport);

        // prevent DXGI handling of alt+enter, which doesn't work properly with Winforms
        using (var factory = swapChain.GetParent<Factory>())
            factory.SetWindowAssociation(form.Handle, WindowAssociationFlags.IgnoreAll);

        form.KeyDown += (o, e) =>                   // handle alt+enter ourselves
        {
            if (e.Alt && e.KeyCode == Keys.Enter)
                swapChain.IsFullScreen = !swapChain.IsFullScreen;
        };

        form.KeyDown += (o, e) =>                   // Alt + X -> Exit Program
        {
            if (e.Alt && e.KeyCode == Keys.X)
            {
                form.Close();
            }
        };

        context.ClearRenderTargetView(renderTarget, Color.Green);       // Fill Screen with specified colour

        Texture2DDescription stereoDesc = new Texture2DDescription()
        {
            ArraySize = 1,
            Width = 3840,
            Height = 1081,
            BindFlags = BindFlags.None,
            CpuAccessFlags = CpuAccessFlags.Write,
            Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
            OptionFlags = ResourceOptionFlags.None,
            Usage = ResourceUsage.Staging,
            MipLevels = 1,
            SampleDescription = new SampleDescription(1, 0)
        };

        // Main Loop 
        MessagePump.Run(form, () =>
        {
            Texture2D texture_stereo =  Make3D(left_im, right_im);      // Create Texture from two bitmaps in memory
            ResourceRegion stereoSrcBox = new ResourceRegion { Front = 0, Back = 1, Top = 0, Bottom = 1080, Left = 0, Right = 1920 };
            context.CopySubresourceRegion(texture_stereo, 0, stereoSrcBox, renderTarget.Resource, 0, 0, 0, 0);
            texture_stereo.Dispose();

            swapChain.Present(0, PresentFlags.None);
        });

        // Dispose resources 

        swapChain.IsFullScreen = false;     // Required before swapchain dispose
        device.Dispose();
        swapChain.Dispose();
        renderTarget.Dispose();

    }



    static Texture2D Make3D(Bitmap leftBmp, Bitmap rightBmp)
    {
        var context = device.ImmediateContext;
        Bitmap left2 = leftBmp.Clone(new RectangleF(0, 0, leftBmp.Width, leftBmp.Height), PixelFormat.Format32bppArgb);     // Change bmp to 32bit ARGB
        Bitmap right2 = rightBmp.Clone(new RectangleF(0, 0, rightBmp.Width, rightBmp.Height), PixelFormat.Format32bppArgb);

        // Show FrameCount on screen: (To test)
        Graphics left_graph = Graphics.FromImage(left2);
        left_graph.DrawString("Frame: " + Count.ToString(), new System.Drawing.Font("Arial", 16), Brushes.Black, new PointF(100, 100));
        left_graph.Dispose();

        Graphics right_graph = Graphics.FromImage(right2);
        right_graph.DrawString("Frame: " + Count.ToString(), new System.Drawing.Font("Arial", 16), Brushes.Black, new PointF(200, 200));
        right_graph.Dispose();
        Count++;

        Texture2DDescription desc2d = new Texture2DDescription()
        {
            ArraySize = 1,
            Width = 1920,
            Height = 1080,
            BindFlags = BindFlags.None,
            CpuAccessFlags = CpuAccessFlags.Write,
            Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
            OptionFlags = ResourceOptionFlags.None,
            Usage = ResourceUsage.Staging,
            MipLevels = 1,
            SampleDescription = new SampleDescription(1, 0)
        };

        Texture2D leftText2 = new Texture2D(device, desc2d);        // Texture2D for each bmp
        Texture2D rightText2 = new Texture2D(device, desc2d);

        Rectangle rect = new Rectangle(0, 0, left2.Width, left2.Height);
        BitmapData leftData = left2.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
        IntPtr left_ptr = leftData.Scan0;
        int left_num_bytes = Math.Abs(leftData.Stride) * leftData.Height;
        byte[] left_bytes = new byte[left_num_bytes];
        byte[] left_bytes2 = new byte[left_num_bytes];

        System.Runtime.InteropServices.Marshal.Copy(left_ptr, left_bytes, 0, left_num_bytes);       // Get Byte array from bitmap
        left2.UnlockBits(leftData);
        DataBox box1 = context.MapSubresource(leftText2, 0, MapMode.Write, SlimDX.Direct3D11.MapFlags.None);
        box1.Data.Write(left_bytes, 0, left_bytes.Length);
        context.UnmapSubresource(leftText2, 0);

        BitmapData rightData = right2.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
        IntPtr right_ptr = rightData.Scan0;
        int right_num_bytes = Math.Abs(rightData.Stride) * rightData.Height;
        byte[] right_bytes = new byte[right_num_bytes];

        System.Runtime.InteropServices.Marshal.Copy(right_ptr, right_bytes, 0, right_num_bytes);       // Get Byte array from bitmap
        right2.UnlockBits(rightData);
        DataBox box2 = context.MapSubresource(rightText2, 0, MapMode.Write, SlimDX.Direct3D11.MapFlags.None);
        box2.Data.Write(right_bytes, 0, right_bytes.Length);
        context.UnmapSubresource(rightText2, 0);

        Texture2DDescription stereoDesc = new Texture2DDescription()
        {
            ArraySize = 1,
            Width = 3840,
            Height = 1081,
            BindFlags = BindFlags.None,
            CpuAccessFlags = CpuAccessFlags.Write,
            Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
            OptionFlags = ResourceOptionFlags.None,
            Usage = ResourceUsage.Staging,
            MipLevels = 1,
            SampleDescription = new SampleDescription(1, 0)
        };
        Texture2D stereoTexture = new Texture2D(device, stereoDesc);    // Texture2D to contain stereo images and Nvidia 3DVision Signature

        // Identify the source texture region to copy (all of it) 
        ResourceRegion stereoSrcBox = new ResourceRegion { Front = 0, Back = 1, Top = 0, Bottom = 1080, Left = 0, Right = 1920 };

        // Copy it to the stereo texture 
        context.CopySubresourceRegion(leftText2, 0, stereoSrcBox, stereoTexture, 0, 0, 0, 0);
        context.CopySubresourceRegion(rightText2, 0, stereoSrcBox, stereoTexture, 0, 1920, 0, 0);   // Offset by 1920 pixels

        // Open the staging texture for reading and go to last row
        DataBox box = context.MapSubresource(stereoTexture, 0, MapMode.Write, SlimDX.Direct3D11.MapFlags.None);
        box.Data.Seek(stereoTexture.Description.Width * (stereoTexture.Description.Height - 1) * 4, System.IO.SeekOrigin.Begin);
        box.Data.Write(stereo_data, 0, stereo_data.Length);            // Write the NVSTEREO header 
        context.UnmapSubresource(stereoTexture, 0);

        left2.Dispose();
        leftText2.Dispose();
        right2.Dispose();
        rightText2.Dispose();
        return stereoTexture;
    } 

}

}

I have tried various ways of copying the stereo image's Texture2D (including the signature, 3840x1081) to the back buffer, but none of the approaches I have tried display both images... Any help or comments would be greatly appreciated, Lane

4 Answers:

Answer 0 (score: 1)

If you choose to use DirectX 11.1, you can enable the stereo feature much more easily, without relying on nVidia's byte magic. Basically, you create a SwapChain1 instead of a regular SwapChain, and then it is as simple as setting Stereo to true.

Take a look at this post I made; it shows you how to create a stereo swap chain. The code is a C# port of MS's own stereo sample. You will then have two render targets, which is much simpler. Before rendering you have to:

void RenderEye(bool rightEye, ITarget target)
{
    RenderTargetView currentTarget = rightEye ? target.RenderTargetViewRight : target.RenderTargetView;
    context.OutputMerger.SetTargets(target.DepthStencilView, currentTarget);
    [clean color/depth]
    [render scene]
    [repeat for each eye]
}

where ITarget is an interface to a class that provides access to the back buffer, render targets, and so on. That's it, DirectX takes care of everything. Hope this helps.
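
To make that concrete, below is a minimal sketch of the DXGI 1.2 stereo path described above. It assumes SharpDX rather than SlimDX (SlimDX never exposed SwapChain1), and the method name, window handle parameter, and buffer sizes are illustrative placeholders, not a drop-in replacement for the linked post:

using System;
using SharpDX.Direct3D;
using SharpDX.Direct3D11;
using SharpDX.DXGI;
using Device = SharpDX.Direct3D11.Device;

// Creates a windowed stereo swap chain on an existing window and returns one
// render-target view per eye (array slice 0 = left eye, slice 1 = right eye).
static void CreateStereoSwapChain(Device device, IntPtr windowHandle,
    out SwapChain1 swapChain, out RenderTargetView leftEye, out RenderTargetView rightEye)
{
    var desc1 = new SwapChainDescription1
    {
        Width = 1920,
        Height = 1080,
        Format = Format.R8G8B8A8_UNorm,
        Stereo = true,                              // the key flag: DXGI allocates a two-slice back buffer
        SampleDescription = new SampleDescription(1, 0),
        Usage = Usage.RenderTargetOutput,
        BufferCount = 2,
        SwapEffect = SwapEffect.FlipSequential,     // flip-model presentation is required for stereo
        Scaling = Scaling.Stretch
    };

    using (var dxgiDevice = device.QueryInterface<SharpDX.DXGI.Device2>())
    using (var adapter = dxgiDevice.Adapter)
    using (var factory = adapter.GetParent<Factory2>())
        swapChain = new SwapChain1(factory, device, windowHandle, ref desc1);

    using (var backBuffer = swapChain.GetBackBuffer<Texture2D>(0))
    {
        var rtvDesc = new RenderTargetViewDescription
        {
            Format = Format.R8G8B8A8_UNorm,
            Dimension = RenderTargetViewDimension.Texture2DArray
        };
        rtvDesc.Texture2DArray.MipSlice = 0;
        rtvDesc.Texture2DArray.ArraySize = 1;

        rtvDesc.Texture2DArray.FirstArraySlice = 0;     // left eye
        leftEye = new RenderTargetView(device, backBuffer, rtvDesc);

        rtvDesc.Texture2DArray.FirstArraySlice = 1;     // right eye
        rightEye = new RenderTargetView(device, backBuffer, rtvDesc);
    }
}

The two views returned here are what a RenderEye-style method would bind with OutputMerger.SetTargets for each eye before drawing.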

Answer 1 (score: 0)

Try creating the back buffer with a width of 1920 instead of 3840. Squeeze each image to half of its width and place them side by side.
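
If it helps, here is a rough System.Drawing sketch of what this suggestion amounts to; the PackSideBySide name and the 1920x1080 target size are just illustrative:

using System.Drawing;
using System.Drawing.Imaging;

// Squeezes each eye's image to half the target width and places them side by side,
// so the packed frame fits a back buffer of the original (non-doubled) width.
static Bitmap PackSideBySide(Bitmap left, Bitmap right, int width = 1920, int height = 1080)
{
    var packed = new Bitmap(width, height, PixelFormat.Format32bppArgb);
    using (var g = Graphics.FromImage(packed))
    {
        g.DrawImage(left, new Rectangle(0, 0, width / 2, height));           // left half, squeezed
        g.DrawImage(right, new Rectangle(width / 2, 0, width / 2, height));  // right half, squeezed
    }
    return packed;
}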

Answer 2 (score: 0)

I remember seeing the same question while searching the Nvidia developer forums a few days ago. Unfortunately, the forums are down because of the recent hacking. I remember the OP on that thread was able to get it working with DX11 and SlimDX using the signature hack. You don't use the StretchRectangle method; it was something like CreateResourceRegion(), but that's not quite it, I can't remember. It may be one of the methods CopyResource() or CopySubresourceRegion() found in this similar thread on Stack Overflow: Copy Texture to Texture

Answer 3 (score: 0)

Are you rendering the image continuously, or at least a few times? I did the same thing in DX9 and had to tell DX to render 3 frames before the driver recognized it as 3D Vision. Do your glasses turn on? Is your back buffer (width * 2) by (height + 1), and are you writing to the back buffer like this:

_________________________
|           |            |      
|  img1     |     img2   |
|           |            |
--------------------------
|_______signature________| where this last row = 1 pix tall
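
One detail worth double-checking against that layout, building on the question's own Make3D method (this note is an aside, not part of the answer above): for a Staging texture the mapped row pitch is not guaranteed to equal Width * 4 bytes, so a safer sketch seeks to the signature row using the pitch reported by Map:

// Sketch only, reusing the SlimDX objects from the question's Make3D (context,
// stereoTexture, stereo_data). Seek by the driver-reported RowPitch instead of
// assuming tightly packed rows of Width * 4 bytes.
DataBox box = context.MapSubresource(stereoTexture, 0, MapMode.Write, SlimDX.Direct3D11.MapFlags.None);
long lastRowOffset = (long)box.RowPitch * (stereoTexture.Description.Height - 1);
box.Data.Seek(lastRowOffset, System.IO.SeekOrigin.Begin);
box.Data.Write(stereo_data, 0, stereo_data.Length);     // NVSTEREO header goes into the 1-pixel signature row
context.UnmapSubresource(stereoTexture, 0);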