To start with, I just need to capture the RGB stream and convert it into the format of an OpenCV image. It shouldn't be that hard, but I found several pieces of code online and none of them run on my computer. I don't know where I went wrong.
Could you give me a tutorial, or a very simple piece of code, showing how to use the Kinect libraries? At first I tried the Kinect SDK; after a while I switched to OpenNI.
Please help me, thanks!
PS: I'm using C++ and Visual Studio 2010.
Answer 0 (score: 2)
AFAIK, OpenCV supports OpenNI 1.5.x out of the box. If you haven't installed OpenNI yet, do that first, in this specific order (it's important):
At this point you should have OpenNI installed, so go ahead and run one of its samples.
By default the prebuilt OpenCV libraries are compiled without OpenNI support, so you will need to build OpenCV from source to enable it.
Install CMake GUI if you haven't done so already; it lets you easily configure the OpenCV build process. Run it, browse to the OpenCV source folder, choose a destination directory for the build files, then hit Configure.
You should get a long list of options. If you scroll through it, you should see the detected OpenNI installation folder (if it wasn't detected, fix the path), and a WITH_OPENNI flag you can enable.
When you're done, press Generate: this writes the Visual Studio project files you need to compile the OpenCV libraries.
For more details on building OpenCV from source on Windows, check the official documentation.
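If you prefer the command line over the CMake GUI, the configure-and-build steps above can be sketched roughly as follows. The generator name and both paths are placeholders: substitute your own OpenCV source checkout and build directory, and the generator matching your CMake/Visual Studio versions.

```shell
# Placeholder paths: adjust to your own layout
cd C:\opencv\build
# Generate VS2010 project files with OpenNI support enabled
cmake -G "Visual Studio 10 2010" -D WITH_OPENNI=ON C:\opencv\source
# Build the generated solution in Release configuration
cmake --build . --config Release
```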
When compilation finishes you will have OpenCV built with OpenNI support, and you should be able to run something as simple as:
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"

#include <iostream>

using namespace cv;
using namespace std;

int main(){
    cout << "opening device(s)" << endl;

    VideoCapture sensor1;
    sensor1.open(CV_CAP_OPENNI);

    if( !sensor1.isOpened() ){
        cout << "Can not open capture object 1." << endl;
        return -1;
    }

    for(;;){
        Mat depth1;
        if( !sensor1.grab() ){
            cout << "Sensor1 can not grab images." << endl;
            return -1;
        }else if( sensor1.retrieve( depth1, CV_CAP_OPENNI_DEPTH_MAP ) ){
            imshow("depth1", depth1);
        }

        if( waitKey( 30 ) == 27 ) break; // ESC to exit
    }
    return 0;
}
Also see this similar answer. If you need to use OpenNI 2.x instead, check the following resources:
Answer 1 (score: 2)
Here is what I find to be the simplest and most effective way to use the Kinect with OpenCV.
The code collects depth and color images continuously; if you only want one of the two streams, you are free to leave the other one unopened.
Here is the version using the C++ OpenCV API and Mat objects:
#include <openni2/OpenNI.h>
#include <opencv2/opencv.hpp>

using namespace openni;

int main()
{
    OpenNI::initialize();
    puts( "Kinect initialization..." );
    Device device;
    if ( device.open( openni::ANY_DEVICE ) != 0 )
    {
        puts( "Kinect not found !" );
        return -1;
    }
    puts( "Kinect opened" );

    VideoStream depth, color;
    color.create( device, SENSOR_COLOR );
    color.start();
    puts( "Camera ok" );
    depth.create( device, SENSOR_DEPTH );
    depth.start();
    puts( "Depth sensor ok" );

    VideoMode paramvideo;
    paramvideo.setResolution( 640, 480 );
    paramvideo.setFps( 30 );
    paramvideo.setPixelFormat( PIXEL_FORMAT_DEPTH_100_UM );
    depth.setVideoMode( paramvideo );
    paramvideo.setPixelFormat( PIXEL_FORMAT_RGB888 );
    color.setVideoMode( paramvideo );
    puts( "Video stream settings ok" );

    // If depth/color synchronisation is not necessary, startup is faster :
    device.setDepthColorSyncEnabled( false );

    // Otherwise, the streams can be synchronised and registered to each other :
    //device.setDepthColorSyncEnabled( true );
    //device.setImageRegistrationMode( openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR );

    VideoStream* stream[] = { &depth, &color };
    puts( "Kinect initialization completed" );

    if ( device.getSensorInfo( SENSOR_DEPTH ) != NULL )
    {
        VideoFrameRef depthFrame, colorFrame;
        // Mat headers only; the data pointer is filled in for each new frame
        cv::Mat colorcv( cv::Size( 640, 480 ), CV_8UC3, NULL );
        cv::Mat depthcv( cv::Size( 640, 480 ), CV_16UC1, NULL );
        cv::namedWindow( "RGB", CV_WINDOW_AUTOSIZE );
        cv::namedWindow( "Depth", CV_WINDOW_AUTOSIZE );

        int changedIndex;
        while( device.isValid() )
        {
            // Block until one of the two streams has a new frame ready
            OpenNI::waitForAnyStream( stream, 2, &changedIndex );
            switch ( changedIndex )
            {
                case 0:
                    depth.readFrame( &depthFrame );
                    if ( depthFrame.isValid() )
                    {
                        depthcv.data = (uchar*) depthFrame.getData();
                        cv::imshow( "Depth", depthcv );
                    }
                    break;

                case 1:
                    color.readFrame( &colorFrame );
                    if ( colorFrame.isValid() )
                    {
                        // OpenNI delivers RGB; OpenCV expects BGR for display
                        colorcv.data = (uchar*) colorFrame.getData();
                        cv::cvtColor( colorcv, colorcv, CV_RGB2BGR );
                        cv::imshow( "RGB", colorcv );
                    }
                    break;

                default:
                    puts( "Error retrieving a stream" );
            }
            cv::waitKey( 1 );
        }

        cv::destroyWindow( "RGB" );
        cv::destroyWindow( "Depth" );
    }

    depth.stop();
    depth.destroy();
    color.stop();
    color.destroy();
    device.close();
    OpenNI::shutdown();
    return 0;
}
And for those who prefer to use OpenCV's C API and the IplImage structure:
#include <openni2/OpenNI.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>

using namespace openni;

int main()
{
    OpenNI::initialize();
    puts( "Kinect initialization..." );
    Device device;
    if ( device.open( openni::ANY_DEVICE ) != 0 )
    {
        puts( "Kinect not found !" );
        return -1;
    }
    puts( "Kinect opened" );

    VideoStream depth, color;
    color.create( device, SENSOR_COLOR );
    color.start();
    puts( "Camera ok" );
    depth.create( device, SENSOR_DEPTH );
    depth.start();
    puts( "Depth sensor ok" );

    VideoMode paramvideo;
    paramvideo.setResolution( 640, 480 );
    paramvideo.setFps( 30 );
    paramvideo.setPixelFormat( PIXEL_FORMAT_DEPTH_100_UM );
    depth.setVideoMode( paramvideo );
    paramvideo.setPixelFormat( PIXEL_FORMAT_RGB888 );
    color.setVideoMode( paramvideo );
    puts( "Video stream settings ok" );

    // If depth/color synchronisation is not necessary, startup is faster :
    device.setDepthColorSyncEnabled( false );

    // Otherwise, the streams can be synchronised and registered to each other :
    //device.setDepthColorSyncEnabled( true );
    //device.setImageRegistrationMode( openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR );

    VideoStream* stream[] = { &depth, &color };
    puts( "Kinect initialization completed" );

    if ( device.getSensorInfo( SENSOR_DEPTH ) != NULL )
    {
        VideoFrameRef depthFrame, colorFrame;
        // Image headers only; the data pointer is filled in for each new frame
        IplImage* colorcv = cvCreateImageHeader( cvSize( 640, 480 ), IPL_DEPTH_8U, 3 );
        IplImage* depthcv = cvCreateImageHeader( cvSize( 640, 480 ), IPL_DEPTH_16U, 1 );
        cvNamedWindow( "RGB", CV_WINDOW_AUTOSIZE );
        cvNamedWindow( "Depth", CV_WINDOW_AUTOSIZE );

        int changedIndex;
        while( device.isValid() )
        {
            // Block until one of the two streams has a new frame ready
            OpenNI::waitForAnyStream( stream, 2, &changedIndex );
            switch ( changedIndex )
            {
                case 0:
                    depth.readFrame( &depthFrame );
                    if ( depthFrame.isValid() )
                    {
                        depthcv->imageData = (char*) depthFrame.getData();
                        cvShowImage( "Depth", depthcv );
                    }
                    break;

                case 1:
                    color.readFrame( &colorFrame );
                    if ( colorFrame.isValid() )
                    {
                        // OpenNI delivers RGB; OpenCV expects BGR for display
                        colorcv->imageData = (char*) colorFrame.getData();
                        cvCvtColor( colorcv, colorcv, CV_RGB2BGR );
                        cvShowImage( "RGB", colorcv );
                    }
                    break;

                default:
                    puts( "Error retrieving a stream" );
            }
            cvWaitKey( 1 );
        }

        cvReleaseImageHeader( &colorcv );
        cvReleaseImageHeader( &depthcv );
        cvDestroyWindow( "RGB" );
        cvDestroyWindow( "Depth" );
    }

    depth.stop();
    depth.destroy();
    color.stop();
    color.destroy();
    device.close();
    OpenNI::shutdown();
    return 0;
}
I hope this is useful.
Enjoy!
PS: Sorry for my bad English, I'm French... ;)