What is the most reliable way to record a Kinect stream for later playback?

Date: 2011-06-22 04:40:44

Tags: kinect processing cinder

I've been using Processing and Cinder to modify Kinect input on the fly. However, I'd also like to record the entire stream (depth + color + accelerometer values, and whatever else is in there). I'm recording so I can try out different effects/treatments on the same material.

Because I'm still learning Cinder, and Processing is slow/laggy, I'm having a hard time finding advice on a strategy for capturing the stream - anything (preferably in Cinder, oF, or Processing) would be very helpful.

1 answer:

Answer 0 (score: 3)

I've tried both Processing and OpenFrameworks. Processing is slower when displaying both images (depth and color). OpenFrameworks slows down a bit while writing the data to disk, but here's the basic approach:

  1. Set up OpenFrameworks (open and compile any sample to make sure you're up and running)
  2. Download the ofxKinect addon and copy the example project as described on github.
  3. Once you have OF and the ofxKinect example running, it's just a matter of adding a few variables to save your data.
  4. In this basic setup, I've created a couple of ofImage instances and a boolean to toggle saving. In the example, the depth and RGB buffers are saved into ofxCvGrayscaleImage instances, but I haven't used OF and OpenCV enough to know how to do something as simple as saving the image to disk, which is why I used the two ofImage instances.

    I don't know how comfortable you are with Processing, OF, or Cinder, so, for the sake of argument, I'll assume you know your way around Processing, but are still getting to grips with C++.

    OF is quite similar to Processing, with a few differences:

    1. In Processing you declare your variables and use them in the same file. In OF you have a .h file where you declare your variables and a .cpp file where you initialize and use them.
    2. In Processing you have the setup() (initialize variables) and draw() (update variables and draw to screen) methods, while in OF you have setup() (same as in Processing), update() (update variables only, nothing visual) and draw() (draw to screen using the updated values).
    3. When working with images, since you're coding in C++, you need to allocate memory first, as opposed to Processing/Java, where you have memory management.
    4. There are more differences, which I won't detail here. Do check out ofImage on the wiki.

      Back to the exampleKinect example, here's my basic setup:

      the .h file:

      #pragma once
      
      #include "ofMain.h"
      #include "ofxOpenCv.h"
      #include "ofxKinect.h"
      
      class testApp : public ofBaseApp {
          public:
      
              void setup();
              void update();
              void draw();
              void exit();
      
              void drawPointCloud();
      
              void keyPressed  (int key);
              void mouseMoved(int x, int y );
              void mouseDragged(int x, int y, int button);
              void mousePressed(int x, int y, int button);
              void mouseReleased(int x, int y, int button);
              void windowResized(int w, int h);
      
              ofxKinect kinect;
      
              ofxCvColorImage     colorImg;
      
              ofxCvGrayscaleImage     grayImage;
              ofxCvGrayscaleImage     grayThresh;
              ofxCvGrayscaleImage     grayThreshFar;
      
              ofxCvContourFinder  contourFinder;
      
              ofImage             colorData;//to save RGB data to disk
              ofImage             grayData;//to save depth data to disk 
      
              bool                bThreshWithOpenCV;
              bool                drawPC;
              bool                saveData;//save to disk toggle
      
              int                 nearThreshold;
              int                 farThreshold;
      
              int                 angle;
      
              int                 pointCloudRotationY;
              int                 saveCount;//counter used for naming 'frames'
      };
      

      and the .cpp file:

      #include "testApp.h"
      
      
      //--------------------------------------------------------------
      void testApp::setup() {
          //kinect.init(true);  //shows infrared image
          kinect.init();
          kinect.setVerbose(true);
          kinect.open();
      
          colorImg.allocate(kinect.width, kinect.height);
          grayImage.allocate(kinect.width, kinect.height);
          grayThresh.allocate(kinect.width, kinect.height);
          grayThreshFar.allocate(kinect.width, kinect.height);
          //allocate memory for these ofImages which will be saved to disk
          colorData.allocate(kinect.width, kinect.height, OF_IMAGE_COLOR);
          grayData.allocate(kinect.width, kinect.height, OF_IMAGE_GRAYSCALE);
      
          nearThreshold = 230;
          farThreshold  = 70;
          bThreshWithOpenCV = true;
      
          ofSetFrameRate(60);
      
          // zero the tilt on startup
          angle = 0;
          kinect.setCameraTiltAngle(angle);
      
          // start from the front
          pointCloudRotationY = 180;
      
          drawPC = false;
      
          saveCount = 0;//init frame counter
      }
      
      //--------------------------------------------------------------
      void testApp::update() {
          ofBackground(100, 100, 100);
      
          kinect.update();
          if(kinect.isFrameNew()) // there is a new frame and we are connected
          {
      
              grayImage.setFromPixels(kinect.getDepthPixels(), kinect.width, kinect.height);
      
              if(saveData){
                  //if toggled, set depth and rgb pixels to respective ofImage, save to disk and update the 'frame' counter 
                  grayData.setFromPixels(kinect.getDepthPixels(), kinect.width, kinect.height,true);
                  colorData.setFromPixels(kinect.getCalibratedRGBPixels(), kinect.width, kinect.height,true);
                  grayData.saveImage("depth"+ofToString(saveCount)+".png");
                  colorData.saveImage("color"+ofToString(saveCount)+".png");
                  saveCount++;
              }
      
              //we do two thresholds - one for the far plane and one for the near plane
              //we then do a cvAnd to get the pixels which are a union of the two thresholds. 
              if( bThreshWithOpenCV ){
                  grayThreshFar = grayImage;
                  grayThresh = grayImage;
                  grayThresh.threshold(nearThreshold, true);
                  grayThreshFar.threshold(farThreshold);
                  cvAnd(grayThresh.getCvImage(), grayThreshFar.getCvImage(), grayImage.getCvImage(), NULL);
              }else{
      
                  //or we do it ourselves - show people how they can work with the pixels
      
                  unsigned char * pix = grayImage.getPixels();
                  int numPixels = grayImage.getWidth() * grayImage.getHeight();
      
                  for(int i = 0; i < numPixels; i++){
                      if( pix[i] < nearThreshold && pix[i] > farThreshold ){
                          pix[i] = 255;
                      }else{
                          pix[i] = 0;
                      }
                  }
              }
      
              //update the cv image
              grayImage.flagImageChanged();
      
              // find contours which are between the size of 20 pixels and 1/3 the w*h pixels.
              // also, find holes is set to true so we will get interior contours as well....
              contourFinder.findContours(grayImage, 10, (kinect.width*kinect.height)/2, 20, false);
          }
      }
      
      //--------------------------------------------------------------
      void testApp::draw() {
          ofSetColor(255, 255, 255);
          if(drawPC){
              ofPushMatrix();
              ofTranslate(420, 320);
              // we need a proper camera class
              drawPointCloud();
              ofPopMatrix();
          }else{
              kinect.drawDepth(10, 10, 400, 300);
              kinect.draw(420, 10, 400, 300);
      
              grayImage.draw(10, 320, 400, 300);
              contourFinder.draw(10, 320, 400, 300);
          }
      
      
          ofSetColor(255, 255, 255);
          stringstream reportStream;
          reportStream << "accel is: " << ofToString(kinect.getMksAccel().x, 2) << " / "
                                       << ofToString(kinect.getMksAccel().y, 2) << " / " 
                                       << ofToString(kinect.getMksAccel().z, 2) << endl
                       << "press p to switch between images and point cloud, rotate the point cloud with the mouse" << endl
                       << "using opencv threshold = " << bThreshWithOpenCV <<" (press spacebar)" << endl
                       << "set near threshold " << nearThreshold << " (press: + -)" << endl
                       << "set far threshold " << farThreshold << " (press: < >) num blobs found " << contourFinder.nBlobs
                          << ", fps: " << ofGetFrameRate() << endl
                       << "press c to close the connection and o to open it again, connection is: " << kinect.isConnected() << endl
                       << "press s to toggle saving depth and color data. currently saving:  " << saveData << endl
                       << "press UP and DOWN to change the tilt angle: " << angle << " degrees";
          ofDrawBitmapString(reportStream.str(),20,656);
      }
      
      void testApp::drawPointCloud() {
          ofScale(400, 400, 400);
          int w = 640;
          int h = 480;
          ofRotateY(pointCloudRotationY);
          float* distancePixels = kinect.getDistancePixels();
          glBegin(GL_POINTS);
          int step = 2;
          for(int y = 0; y < h; y += step) {
              for(int x = 0; x < w; x += step) {
                  ofPoint cur = kinect.getWorldCoordinateFor(x, y);
                  ofColor color = kinect.getCalibratedColorAt(x,y);
                  glColor3ub((unsigned char)color.r,(unsigned char)color.g,(unsigned char)color.b);
                  glVertex3f(cur.x, cur.y, cur.z);
              }
          }
          glEnd();
      }
      
      //--------------------------------------------------------------
      void testApp::exit() {
          kinect.setCameraTiltAngle(0); // zero the tilt on exit
          kinect.close();
      }
      
      //--------------------------------------------------------------
      void testApp::keyPressed (int key) {
          switch (key) {
              case ' ':
                  bThreshWithOpenCV = !bThreshWithOpenCV;
              break;
              case 'p':
                  drawPC = !drawPC;
                  break;
      
              case '>':
              case '.':
                  farThreshold ++;
                  if (farThreshold > 255) farThreshold = 255;
                  break;
              case '<':       
              case ',':       
                  farThreshold --;
                  if (farThreshold < 0) farThreshold = 0;
                  break;
      
              case '+':
              case '=':
                  nearThreshold ++;
                  if (nearThreshold > 255) nearThreshold = 255;
                  break;
              case '-':       
                  nearThreshold --;
                  if (nearThreshold < 0) nearThreshold = 0;
                  break;
              case 'w':
                  kinect.enableDepthNearValueWhite(!kinect.isDepthNearValueWhite());
                  break;
              case 'o':
                  kinect.setCameraTiltAngle(angle);   // go back to prev tilt
                  kinect.open();
                  break;
              case 'c':
                  kinect.setCameraTiltAngle(0);       // zero the tilt
                  kinect.close();
                  break;
              case 's'://s to toggle saving data
                  saveData = !saveData;
                  break;
      
              case OF_KEY_UP:
                  angle++;
                  if(angle>30) angle=30;
                  kinect.setCameraTiltAngle(angle);
                  break;
      
              case OF_KEY_DOWN:
                  angle--;
                  if(angle<-30) angle=-30;
                  kinect.setCameraTiltAngle(angle);
                  break;
          }
      }
      
      //--------------------------------------------------------------
      void testApp::mouseMoved(int x, int y) {
          pointCloudRotationY = x;
      }
      
      //--------------------------------------------------------------
      void testApp::mouseDragged(int x, int y, int button)
      {}
      
      //--------------------------------------------------------------
      void testApp::mousePressed(int x, int y, int button)
      {}
      
      //--------------------------------------------------------------
      void testApp::mouseReleased(int x, int y, int button)
      {}
      
      //--------------------------------------------------------------
      void testApp::windowResized(int w, int h)
      {}
      

      This is a very basic setup. Feel free to modify it (add the tilt angle to the saved data, etc.). I'm pretty sure there are ways to improve this speed-wise (e.g. don't update the ofxCvGrayscaleImage instances or draw the images to screen while saving, or stack a few frames and write them at an interval, as opposed to writing every frame, etc.)

      Good luck!