Kinect depth detection

Time: 2014-01-05 09:30:21

Tags: c# winforms kinect

I know how to do this in WPF, but I'm having trouble capturing depth in a WinForms application.

I found the following code:

private void Kinect_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame != null)
        {
            Bitmap DepthBitmap = new Bitmap(depthFrame.Width, depthFrame.Height, PixelFormat.Format32bppRgb);
            if (_depthPixels.Length != depthFrame.PixelDataLength)
            {
                _depthPixels = new DepthImagePixel[depthFrame.PixelDataLength];
                _mappedDepthLocations = new ColorImagePoint[depthFrame.PixelDataLength];
            }

            // Copy the depth frame data onto the bitmap
            var _pixelData = new short[depthFrame.PixelDataLength];
            depthFrame.CopyPixelDataTo(_pixelData);
            BitmapData bmapdata = DepthBitmap.LockBits(new Rectangle(0, 0, depthFrame.Width, depthFrame.Height),
                ImageLockMode.WriteOnly, DepthBitmap.PixelFormat);
            IntPtr ptr = bmapdata.Scan0;
            Marshal.Copy(_pixelData, 0, ptr, depthFrame.Width * depthFrame.Height);
            DepthBitmap.UnlockBits(bmapdata);
            pictureBox2.Image = DepthBitmap;
        }
    }
}

But this doesn't give me a grayscale depth image; instead it comes out purple. Any improvements or help?

3 Answers:

Answer 0 (score: 1)

I found the solution myself, with a function that converts the depth frame:

void Kinect_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame != null)
        {
            this.depthFrame32 = new byte[depthFrame.Width * depthFrame.Height * 4];
            // Update the image to the new format
            this.depthPixelData = new short[depthFrame.PixelDataLength];
            depthFrame.CopyPixelDataTo(this.depthPixelData);
            byte[] convertedDepthBits = this.ConvertDepthFrame(this.depthPixelData, ((KinectSensor)sender).DepthStream);
            Bitmap bmap = new Bitmap(depthFrame.Width, depthFrame.Height, PixelFormat.Format32bppRgb);
            BitmapData bmapdata = bmap.LockBits(new Rectangle(0, 0, depthFrame.Width, depthFrame.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
            IntPtr ptr = bmapdata.Scan0;
            Marshal.Copy(convertedDepthBits, 0, ptr, 4 * depthFrame.PixelDataLength);
            bmap.UnlockBits(bmapdata);
            pictureBox2.Image = bmap;
        }
    }
}

private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream)
{
    // Run through the depth frame, producing four output bytes for each 16-bit sample
    for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < this.depthFrame32.Length; i16++, i32 += 4)
    {
        // We don't care about the player information here, so shift it out of the value.
        int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;
        // We are left with 13 bits of depth information that we need to convert
        // into an 8-bit number for each pixel.
        // There are hundreds of ways to do this; this is just the simplest one.
        byte Distance = 0;
        // Xbox Kinects (default) are limited to between 800mm and 4096mm.
        int MinimumDistance = 800;
        int MaximumDistance = 4096;
        // Xbox Kinects (default) are not reliable closer than 800mm, so discard those measurements.
        // If the distance at this pixel is greater than 800mm, paint it in its equivalent gray.
        if (realDepth > MinimumDistance)
        {
            // Convert realDepth into the 0 to 255 range.
            // Use only one of the following Distance assignments:
            // White = far, black = close:
            //Distance = (byte)((realDepth - MinimumDistance) * 255 / (MaximumDistance - MinimumDistance));
            // White = close, black = far:
            Distance = (byte)(255 - ((realDepth - MinimumDistance) * 255 / (MaximumDistance - MinimumDistance)));
            // Paint the R, G and B channels of the current pixel with the same value,
            // which produces a shade between black and white.
            this.depthFrame32[i32 + RedIndex] = Distance;
            this.depthFrame32[i32 + GreenIndex] = Distance;
            this.depthFrame32[i32 + BlueIndex] = Distance;
        }
        // If we are closer than 800mm, paint the pixel black so we know it is not giving a good value.
        else
        {
            this.depthFrame32[i32 + RedIndex] = 0;
            this.depthFrame32[i32 + GreenIndex] = 0;
            this.depthFrame32[i32 + BlueIndex] = 0;
        }
    }
    return this.depthFrame32;
}
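For reference, the two snippets above never show the declarations of depthFrame32, depthPixelData and the channel index constants. A minimal sketch of what they could look like follows; the constant values are my assumption, chosen to match the little-endian BGRA byte layout of PixelFormat.Format32bppRgb:

    // Assumed supporting fields for the handler and converter above.
    private byte[] depthFrame32;     // 4 bytes (B, G, R, unused) per depth pixel
    private short[] depthPixelData;  // raw 16-bit depth samples from the sensor
    private const int BlueIndex = 0;
    private const int GreenIndex = 1;
    private const int RedIndex = 2;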

Answer 1 (score: 0)

So I assume the RGB frame is already working for you. In that case:

First, to enable the depth camera you need to call:

sensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH | /* any other flags you also use */);

Second, to start streaming, you need to call:

if (int(streams&_Kinect_zed)) ret=sensor->NuiImageStreamOpen(
    NUI_IMAGE_TYPE_DEPTH,                   // Depth camera or RGB camera?
    NUI_IMAGE_RESOLUTION_640x480,           // Image resolution
    NUI_IMAGE_STREAM_FLAG_DISTINCT_OVERFLOW_DEPTH_VALUES, // Image stream flags // NUI_IMAGE_STREAM_FLAG_ENABLE_NEAR_MODE does not work !!!
    2,                                      // Number of frames to buffer
    NULL,                                   // Event handle
    &stream_hzed); else stream_hzed=NULL;
  • Note that not all resolution/flag combinations work on all models of Kinect!!!
  • The combination above is safe, even for older models like mine.

This is how I capture a frame (called repeatedly from a timer or thread loop):

ret=sensor->NuiImageStreamGetNextFrame(stream_hzed,0,&imageFrame); if (ret>=0)
    {
    // copy data from frame
    imageFrame.pFrameTexture->LockRect(0, &LockedRect, NULL, 0);
    if (LockedRect.Pitch!=0)
        {
        const BYTE* curr = (const BYTE*) LockedRect.pBits;
        union _col { BYTE u8[2]; WORD u16; } col;
        col.u16=0;
        pnt3d p;
        long ax,ay;
        float mxs=float(xs)/(62.0*deg),mys=float(ys)/(48.6*deg);
        for(int x=0,y=0;;)
            {
            col.u8[0]=*curr; curr++;
            col.u8[1]=*curr; curr++;
            p.raw=col.u16;
            p.rgb=&rgb_default;
                 if (p.raw==0x0000) p.z=0.0;    // p.z is the perpendicular distance from the sensor (the Kinect corrects this itself)
            else if (p.raw>=0x8000) p.z=4.0;
            else p.z=0.8+(float(p.raw-6576)*0.00012115165336374002280501710376283); // linear map of the raw reading to metres
            // depth FOV correction
            p.x=zx[x]*p.z;
            p.y=zy[y]*p.z;
            // color FOV correction     zed 58.5° x 45.6° | rgb 62.0° x 48.6° | 25mm distance
            if (p.z>0.0)
                {
                ax=(((x+10-xs2)*241)>>8)+xs2;   // cameras x-offset and different FOV
                ay=(((y+30-ys2)*240)>>8)+ys2;   // cameras y-offset??? and different FOV
                if ((ax>=0)&&(ax<xs))
                if ((ay>=0)&&(ay<ys)) p.rgb=&rgb[ay][ax];
                }
            xyz[y][x]=p;
            x++; if (x>=xs) { x=0; y++; if (y>=ys) break; }
            }
        }
    // release frame
    imageFrame.pFrameTexture->UnlockRect(0);
    ret=sensor->NuiImageStreamReleaseFrame(stream_hzed, &imageFrame);
    stream_changed|=_Kinect_zed;
    }

Sorry for the incomplete source code...

  • Everything is copy-pasted from my Kinect class (BDS2006 Turbo C++).
  • So check your code in case you forgot something; if so, convert my code to C# (I am not a C# user).
  • Most likely you forgot to call NuiInitialize with the depth flag.
  • Or you set an invalid resolution/flags/precision or frame rate for your hardware.

If nothing works at all, then you need to initialize the sensor first:

int sensors;
INuiSensor *sensor;
if ((NUIGetSensorCount(&sensors)<0)||(sensors<1)) return false;
if (NUICreateSensorByIndex(0,&sensor)<0) return false;

If you link to the DLL yourself, then link only these functions:

typedef HRESULT(__stdcall *_NuiGetSensorCount     )(int * pCount);                        _NuiGetSensorCount      NUIGetSensorCount     =NULL;
typedef HRESULT(__stdcall *_NuiCreateSensorByIndex)(int index,INuiSensor **ppNuiSensor);  _NuiCreateSensorByIndex NUICreateSensorByIndex=NULL;
  • All other functions (must) be obtained through COM from inside the SDK headers!
  • If you link and use them yourself instead, you will not be able to connect to your physical Kinect!!!

Answer 2 (score: 0)

Basically, the Kinect SDK was developed for WPF applications. In Windows Forms you have to convert the short array of depth data into a Bitmap to display it in a PictureBox. Based on my experiments, WPF is better suited for programming with the Kinect.

Below is the function I use to convert the depth frame to a Bitmap for display in a PictureBox:

private Bitmap ImageToBitmap(DepthImageFrame Image)
{
    short[] pixeldata = new short[Image.PixelDataLength];
    Image.CopyPixelDataTo(pixeldata);
    Bitmap bmap = new Bitmap(Image.Width, Image.Height, PixelFormat.Format16bppRgb555);
    BitmapData bmapdata = bmap.LockBits(new Rectangle(0, 0, Image.Width, Image.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
    IntPtr ptr = bmapdata.Scan0;
    Marshal.Copy(pixeldata, 0, ptr, Image.PixelDataLength);
    bmap.UnlockBits(bmapdata);
    return bmap;
}

You can call it like this:

DepthImageFrame VFrame = e.OpenDepthImageFrame();
if (VFrame == null) return;
Bitmap bmap = ImageToBitmap(VFrame);
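
Note that Format16bppRgb555 maps the raw depth bits straight onto 5-bit colour channels, so the result is a false-colour image rather than grey. If a grey picture is the goal, a variant of this helper can scale each sample into 0-255 first, along the lines of the ConvertDepthFrame approach in the accepted answer. A minimal sketch; the method name DepthToGrayBitmap is made up here, the rest is standard Kinect SDK 1.x API:

    private Bitmap DepthToGrayBitmap(DepthImageFrame frame)
    {
        short[] raw = new short[frame.PixelDataLength];
        frame.CopyPixelDataTo(raw);

        byte[] bgra = new byte[frame.PixelDataLength * 4];
        for (int i = 0, j = 0; i < raw.Length; i++, j += 4)
        {
            // Strip the 3 player-index bits, then scale 800-4096 mm into 0-255.
            int depth = raw[i] >> DepthImageFrame.PlayerIndexBitmaskWidth;
            byte gray = depth <= 800 ? (byte)0
                      : (byte)Math.Min(255, (depth - 800) * 255 / (4096 - 800));
            bgra[j] = gray;       // B
            bgra[j + 1] = gray;   // G
            bgra[j + 2] = gray;   // R
        }

        Bitmap bmap = new Bitmap(frame.Width, frame.Height, PixelFormat.Format32bppRgb);
        BitmapData bmapdata = bmap.LockBits(new Rectangle(0, 0, frame.Width, frame.Height),
                                            ImageLockMode.WriteOnly, bmap.PixelFormat);
        Marshal.Copy(bgra, 0, bmapdata.Scan0, bgra.Length);
        bmap.UnlockBits(bmapdata);
        return bmap;
    }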