How can I efficiently store past depth pixel data from the depthMapRealWorld() method?

Time: 2012-03-25 20:28:02

Tags: processing kinect

I'm stuck on a particular problem with SimpleOpenNI for Processing and I'm looking for your help.

I want to store snapshots of the pixel depth data (returned by the .depthMapRealWorld() method as an array of PVectors) at discrete time intervals, and then process them further for a presentation. I tried adding them to an ArrayList, but it seems that depthMapRealWorld() only returns a reference to the current depth data, not an independent array. I tried the following, in order:

  1. Simply grab the data and add it to the ArrayList. On every call of update(), the whole ArrayList contained identical PVector arrays, even though the array at position zero had been added many iterations earlier! (A short standalone sketch after the code below illustrates this reference behaviour.)

  2. Then I wrapped the PVector array and its creation time in a class. I rewrote the sketch a little, but it didn't help: all the arrays in the ArrayList were still identical.

  3. Finally, in the constructor of the class, I copied the xyz coordinates of every vector from the PVector array into an int array "by hand". That seems to solve the problem (the arrays in the ArrayList now differ from each other), but this solution introduces serious performance issues.

  4. The question: is there a more efficient way to store these PVector arrays while preserving their values?

    Code:

    import processing.opengl.*;
    import SimpleOpenNI.*;
    
    SimpleOpenNI kinect;
    
    
    float rotation = 0;
    int time = 0;
    ArrayList dissolver;
    ArrayList<Integer> timer;
    int pSize = 10;
    Past past;
    
    void setup()  {
      dissolver = new ArrayList();
      timer = new ArrayList();
      size(1024, 768, OPENGL);
      kinect = new SimpleOpenNI(this);
      kinect.enableDepth();
      translate(width/2, height/2, -100);
      rotateX(radians(180));
      stroke(255);
    }
    
    void draw() {
      background(0);
      translate(width/2, height/2, 500);
      rotateX(radians(180));
      kinect.update();
      stroke (255, 255, 255);
    
      past = new Past (kinect.depthMapRealWorld(), time);
      if (dissolver.size() == pSize) {  //remove the oldest arraylist element when the list gets full
        dissolver.remove(0); //
      }  
    
      if (time % 20 == 0) {
        dissolver.add (past);
        Past p1 = (Past) dissolver.get (0);
        float [][] o2 = p1.getVector(); 
        println ("x coord of a random point at arraylist position 0: " + o2[50000][0]); //for testing
      }
      if (dissolver.size() == pSize-1) {
        //dissolve ();  
      }
      time ++;
    }
    
    void dissolve () { //from the previous nonworking version; ignore
      for (int offset = 0; offset < pSize-1; offset ++) {
        PVector[] offPoints = (PVector[]) dissolver.get (offset);
        int offTime =  timer.get(offset);
        for (int i = 0; i < offPoints.length; i+=10) {
          int col = (time-offTime)*2; //why??
          stroke (255, 0, col);
          PVector currentPoint = offPoints[i];
          if (currentPoint.z <1500) {
            point(currentPoint.x, currentPoint.y, currentPoint.z); // - 2*(time-offTime) + random(0, 100)
          }
        }
      }
    }
    
    class Past {
      private PVector [] depth; //should contain this, not int 
      private float [][] depth1;
      private  int time;
    
      Past (PVector [] now, int t) {
        //should be like this: depth = now;
        //clumsy and performancewise catastrophic solution below
        depth1 = new float [now.length][3];
        for (int i = 0; i< now.length; i+=10) {
          PVector temp = now[i];
          depth1 [i][0] = temp.x;
          depth1 [i][1] = temp.y;
          depth1 [i][2] = temp.z;
    
        }
        //arrayCopy(now, depth); this didn't work either
        time = t;
      }
    
      float [][] getVector () {
        return depth1;
      }
    
      int getTime () {
        return time;
      }
    }
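
For reference, the behaviour in point 1 comes down to Java reference semantics. A minimal standalone sketch along these lines (the names are purely illustrative) shows why storing the returned array directly leaves every ArrayList entry pointing at the same, constantly overwritten data, while copying each PVector with get() preserves a snapshot:

    void setup() {
      // 'live' stands in for the buffer that SimpleOpenNI reuses on every update()
      ArrayList<PVector[]> byReference = new ArrayList<PVector[]>();
      ArrayList<PVector[]> byCopy      = new ArrayList<PVector[]>();
      PVector[] live = { new PVector(0, 0, 0) };
    
      for (int f = 0; f < 3; f++) {
        live[0].x = f;                       // the "library" overwrites the same data each frame
        byReference.add(live);               // stores only a reference to that shared array
        PVector[] snapshot = new PVector[live.length];
        for (int i = 0; i < live.length; i++) snapshot[i] = live[i].get(); // copy each PVector
        byCopy.add(snapshot);                // stores an independent snapshot
      }
    
      println(byReference.get(0)[0].x);      // 2.0 - every reference entry shows the latest data
      println(byCopy.get(0)[0].x);           // 0.0 - the copied snapshot kept its old value
    }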
    

1 Answer:

Answer 0 (score: 3):

If I understand correctly, you want to store the 3D positions (an ArrayList of PVectors) for each frame, right? If so, you should be able to simply store the PVectors and reference them later. Here's a basic sketch to illustrate this:

import processing.opengl.*;
import SimpleOpenNI.*;

SimpleOpenNI kinect;
ArrayList<ArrayList<PVector>> frames = new ArrayList<ArrayList<PVector>>();
ArrayList<PVector> frame;
boolean isRecording = true;
boolean isRecFrame;

void setup()  {
  size(1024, 768, OPENGL);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  stroke(255);
}

void draw() {
  background(0);
  translate(width/2, height/2, 500);
  rotateX(PI);
  translate(0,0,-1000); 

  kinect.update();

  if(isRecording){

    isRecFrame = (frameCount % 20 == 0);//record every 20 frames
    int[]   depthMap = kinect.depthMap();
    int     steps   = 5;  // to speed up the drawing, draw every N point
    int     index;
    PVector realWorldPoint;
    if(isRecFrame) frame = new ArrayList<PVector>();

    for(int y=0;y < kinect.depthHeight();y+=steps)
    {
      for(int x=0;x < kinect.depthWidth();x+=steps)
      {
        index = x + y * kinect.depthWidth();
        if(depthMap[index] > 0)
        { 
          realWorldPoint = kinect.depthMapRealWorld()[index];
          point(realWorldPoint.x,realWorldPoint.y,realWorldPoint.z);
          if(isRecFrame) frame.add(realWorldPoint.get());
        }
      } 
    }
    if(isRecFrame) frames.add(frame); 

  }else{//playback
    if(frames.size() > 0){//guard: avoids a divide-by-zero if playback is toggled before any frame was recorded
      ArrayList<PVector> currentFrame = frames.get(frameCount%frames.size());//playback is faster than recording now for testing purposes - add a decent frame counter here at some point
      for(PVector p : currentFrame) point(p.x,p.y,p.z);
    }
  }

}

void keyPressed(){
  if(key == ' ') isRecording = !isRecording;
}

Use the SPACE key to toggle between recording and playback. The main thing to note is that I store a copy of the real-world position of each depth pixel (frame.add(realWorldPoint.get());). Another thing to bear in mind is that at the moment you are storing these coordinates in memory, which will fill up at some point. If you only store a limited number of frames that should be fine; otherwise you may want to save the points to disk. That way you can reuse a recording with other sketches. A basic way to do that is to dump them to a CSV file:

void saveCSV(ArrayList<PVector> pts){
  String csv = "x,y,z\n";
  for(PVector p : pts) csv += p.x + "," + p.y + "," + p.z + "\n";
  saveStrings("frame_"+frameCount+".csv",csv.split("\n"));
}
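
The recording can then be read back into another sketch later; a rough loader for the frame_N.csv layout written above (loadCSV is just an illustrative helper name) could look like this:

ArrayList<PVector> loadCSV(String fileName) {
  ArrayList<PVector> pts = new ArrayList<PVector>();
  String[] lines = loadStrings(fileName);        // e.g. "frame_100.csv" saved earlier
  for (int i = 1; i < lines.length; i++) {       // skip the "x,y,z" header line
    String[] xyz = split(lines[i], ',');
    if (xyz.length == 3) pts.add(new PVector(float(xyz[0]), float(xyz[1]), float(xyz[2])));
  }
  return pts;
}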

Another option is to use a point-cloud format better suited for this, such as PLY. Saving an ASCII PLY file is fairly simple:

void savePLY(ArrayList<PVector> pts){
  String ply = "ply\n";
  ply += "format ascii 1.0\n";
  ply += "element vertex " + pts.size() + "\n";
  ply += "property float x\n";
  ply += "property float y\n";
  ply += "property float z\n";
  ply += "end_header\n";
  for(PVector p : pts)ply += p.x + " " + p.y + " " + p.z + "\n";
  saveStrings("frame_"+frameCount+".ply",ply.split("\n"));
}

You can later open/browse/process these files with tools like MeshLab.
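
If you'd rather read a recording back into another Processing sketch instead, a minimal loader for the ASCII PLY written above could look roughly like this (loadPLY is just an illustrative helper name, and it assumes exactly the header produced by savePLY()):

ArrayList<PVector> loadPLY(String fileName) {
  ArrayList<PVector> pts = new ArrayList<PVector>();
  String[] lines = loadStrings(fileName);
  boolean headerDone = false;
  for (String line : lines) {
    if (!headerDone) {                           // skip everything up to and including end_header
      if (line.trim().equals("end_header")) headerDone = true;
      continue;
    }
    String[] xyz = splitTokens(line, " ");       // each vertex line is "x y z"
    if (xyz.length >= 3) pts.add(new PVector(float(xyz[0]), float(xyz[1]), float(xyz[2])));
  }
  return pts;
}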