Kinect v2 + Unity -> MapDepthFrameToColorSpaceUsingIntPtr

Time: 2014-09-18 14:47:52

Tags: c# unity3d kinect

I am trying to map depth data from the Kinect v2 to color space in a Unity script. It works as expected with the usual coordinate mapper function:

_sensor.CoordinateMapper.MapDepthFrameToColorSpace(_depthData, _colorSpacePoints)

But it drops my framerate by 11, which is unacceptable =)

So I looked at the Unity sample provided by Microsoft and found code that does the mapping using a pointer:

var pDepthData = GCHandle.Alloc(pDepthBuffer, GCHandleType.Pinned);
var pDepthCoordinatesData = GCHandle.Alloc(m_pDepthCoordinates, GCHandleType.Pinned);

m_pCoordinateMapper.MapColorFrameToDepthSpaceUsingIntPtr(
        pDepthData.AddrOfPinnedObject(), 
        (uint)pDepthBuffer.Length * sizeof(ushort),
        pDepthCoordinatesData.AddrOfPinnedObject(), 
        (uint)m_pDepthCoordinates.Length);

pDepthCoordinatesData.Free();
pDepthData.Free();

An equivalent method exists for what I need, so I tried the pointer version of MapDepthFrameToColorSpace:

var pDepthData = GCHandle.Alloc(pDepthBuffer, GCHandleType.Pinned);
var pColorData = GCHandle.Alloc(m_pColorSpacePoints, GCHandleType.Pinned);

m_pCoordinateMapper.MapDepthFrameToColorSpaceUsingIntPtr(
        pDepthData.AddrOfPinnedObject(), 
        (uint)pDepthBuffer.Length * sizeof(ushort),
        pColorData.AddrOfPinnedObject(),
        (uint)m_pColorSpacePoints.Length * sizeof(float) * 2);

pColorData.Free();
pDepthData.Free();
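For reference, my buffers are set up roughly like this (a minimal sketch with the field names used above; 512 x 424 is the Kinect v2 depth resolution):

```csharp
// Sketch of the buffer setup (InitBuffers is a made-up name).
private const int DepthWidth = 512;   // Kinect v2 depth frame width
private const int DepthHeight = 424;  // Kinect v2 depth frame height

private ushort[] pDepthBuffer;
private ColorSpacePoint[] m_pColorSpacePoints;

void InitBuffers()
{
    // one ushort depth sample per depth pixel
    pDepthBuffer = new ushort[DepthWidth * DepthHeight];

    // one mapped ColorSpacePoint per depth pixel -> same length,
    // as recommended in the MSDN documentation
    m_pColorSpacePoints = new ColorSpacePoint[pDepthBuffer.Length];
}
```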

pDepthBuffer has valid data at the time of the call, and m_pColorSpacePoints is initialized and has the same length as pDepthBuffer (as recommended in the MSDN documentation). I am also on SDK version 1408.

After the call, the result is an array of invalid ColorSpacePoints with Empty/NegativeInfinity float values. There is no error message.

Any suggestions?

1 Answer:

Answer 0 (score: 0)

I got it working. After updating to the latest SDK version 1409, the call returned an error message:

ArgumentException: Value does not fall within the expected range. Rethrow as ArgumentException: This API has returned an exception from an HRESULT: 0x80070057

So I played around with the parameters, and now I get valid data. The culprit was the last argument of MapDepthFrameToColorSpaceUsingIntPtr: it takes the number of ColorSpacePoints, not the buffer size in bytes.

Here are my modifications to the CoordinateMapperManager from the GreenScreen sample. I added a buffer for the mapped color values, so that each depth pixel position yields the corresponding pixel position in the HD color stream.

//added declaration and initialization
private ColorSpacePoint[] m_pColorSpacePoints;

// in the Awake method
m_pColorSpacePoints = new ColorSpacePoint[pDepthBuffer.Length];

// accessor to the ColorSpacePoints
public ColorSpacePoint[] GetColorSpacePointBuffer()
{
  return m_pColorSpacePoints;
}

// the new ProcessFrame method
void ProcessFrame()
{
  var pDepthData = GCHandle.Alloc(pDepthBuffer, GCHandleType.Pinned);
  var pDepthCoordinatesData = GCHandle.Alloc(m_pDepthCoordinates, GCHandleType.Pinned);
  var pColorData = GCHandle.Alloc(m_pColorSpacePoints, GCHandleType.Pinned);

  m_pCoordinateMapper.MapColorFrameToDepthSpaceUsingIntPtr(
    pDepthData.AddrOfPinnedObject(), 
    (uint)pDepthBuffer.Length * sizeof(ushort),
    pDepthCoordinatesData.AddrOfPinnedObject(), 
    (uint)m_pDepthCoordinates.Length);


  m_pCoordinateMapper.MapDepthFrameToColorSpaceUsingIntPtr(
    pDepthData.AddrOfPinnedObject(),
    (uint)pDepthBuffer.Length * sizeof(ushort),
    pColorData.AddrOfPinnedObject(),
    (uint)m_pColorSpacePoints.Length);

  pColorData.Free();
  pDepthCoordinatesData.Free();
  pDepthData.Free();

  m_pColorRGBX.LoadRawTextureData(pColorBuffer);
  m_pColorRGBX.Apply ();
}

And here is an example of how to use it. I use Unity to do the rendering in a GPU shader. The following code is a shortened, untested snippet to demonstrate how to get the data:

//Size of the Kinect V2 Depth Stream
var _width = 512;
var _height = 424;
var _particleCount = _width * _height;

//Initialize an Array to store position data for particles
var _particleArray = new Vector3[_particleCount];
var _colorArray = new Color[_particleCount];

//Get Depth Data
var _depthData = _coordinateMapperManager.GetDepthPointBuffer();

//Get Mapped Color Space Points
var _colorSpacePoints = _coordinateMapperManager.GetColorSpacePointBuffer();

//Get the Colorstream as Texture
var _texture = _coordinateMapperManager.GetColorTexture();

var c = 0;
for (var i = 0; i < _height; ++i)
{
  for (var j = 0; j < _width; ++j)
  {
    _particleArray[c++] = new Vector3(j, i);
  }
}

for (var i = 0; i < _depthData.Length; ++i)
{
    _colorArray[i] = _texture.GetPixel((int)_colorSpacePoints[i].X, (int)_colorSpacePoints[i].Y);
    _particleArray[i].z = _depthData[i];
}

// Now we have all the data to build a colored 3D point cloud
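If you don't want to go through a GPU shader, one alternative way to draw the cloud (a hypothetical sketch, not part of my actual setup; PointCloudRenderer and UpdateCloud are made-up names) is to push the two arrays into a Unity Mesh with point topology:

```csharp
using UnityEngine;

// Hypothetical helper: renders the point cloud as a Mesh of points.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class PointCloudRenderer : MonoBehaviour
{
    private Mesh _mesh;

    void Start()
    {
        _mesh = new Mesh();
        GetComponent<MeshFilter>().mesh = _mesh;
    }

    // call after filling the arrays as shown above;
    // note: older Unity versions cap a Mesh at 65535 vertices, so a
    // full 512 x 424 cloud may have to be split across several meshes
    public void UpdateCloud(Vector3[] particleArray, Color[] colorArray)
    {
        _mesh.Clear();
        _mesh.vertices = particleArray;
        _mesh.colors = colorArray;

        // one index per vertex, drawn as points
        var indices = new int[particleArray.Length];
        for (var i = 0; i < indices.Length; ++i)
            indices[i] = i;

        _mesh.SetIndices(indices, MeshTopology.Points, 0);
    }
}
```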