iOS: rendering and reading the depth buffer for off-screen processing

Date: 2012-06-09 03:14:46

Tags: ios touch opengl-es-2.0 depth-buffer glreadpixels

Question

My main goal is to get the model coordinates of a touch on the device, so I can check what was touched. I am working with a large model in which many things have to be drawn, and those things also have to be touchable.

As far as I know, there are two possible approaches. On the one hand, we could do ray casting and intersect the camera's pointing vector with the model, which we would then have to keep somewhere in memory. On the other hand, and this is what I would like to do, we can do it the old-fashioned way:

function gluUnProject(winx, winy, winz: TGLdouble; 
                  const modelMatrix: TGLMatrixd4; 
                  const projMatrix: TGLMatrixd4; 
                  const viewport: TGLVectori4; 
                  objx, objy, objz: PGLdouble): TGLint;

and convert the screen coordinates back to model coordinates. Am I on the right track here? Do you know other ways touch handling is done in OpenGL applications? As you can see, the function takes winz as a parameter, which is the depth of the fragment at the given screen coordinate; this information normally comes from the depth buffer. I already know that OpenGL ES 2.0 does not provide access to the depth buffer it uses internally, the way "normal" OpenGL does. So how can I get this information?

Apple suggests two possibilities: create an off-screen framebuffer with a depth attachment, or render the depth information into a texture. Unfortunately for me, the manual does not show a way to read the information back on iOS. I think I have to use glReadPixels to read it back. I implemented everything I could find, but no matter how I set things up, I cannot get a correct depth result from either the off-screen framebuffer or the texture. I expect a GL_FLOAT holding the z value, but I get:

Z:28550323

r:72 g:235 b:191 [3]:1 <- always this

Code

gluUnProject

Since there is no GLU library on iOS, I looked up the code based on this source and implemented the following method: link. The GLKVector2 screen input variable holds the X,Y screen coordinates read from a UITapGestureRecognizer:

-(GLKVector4)unprojectScreenPoint:(GLKVector2)screen {

//get active viewport
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
NSLog(@"viewport [0]:%d [1]:%d [2]:%d [3]:%d", viewport[0], viewport[1], viewport[2], viewport[3]);

//get matrices
//invert (projection * modelView); note the operand order:
//GLKMatrix4Multiply(a, b) returns a * b
GLKMatrix4 projectionModelViewMatrix = GLKMatrix4Multiply(_projectionMatrix, _modelViewMatrix);
projectionModelViewMatrix = GLKMatrix4Invert(projectionModelViewMatrix, NULL);

//in iOS, Y is inverted relative to OpenGL window coordinates
screen.v[1] = viewport[3]-screen.v[1];
NSLog(@"screen: [0]:%.2f [1]:%.2f", screen.v[0], screen.v[1]);

//read from the depth component of the last rendered offscreen framebuffer
/*
//Note: this cannot work on ES 2.0: glReadPixels only accepts
//GL_RGBA/GL_UNSIGNED_BYTE (plus one implementation-defined pair),
//never a depth format; GL_DEPTH_COMPONENT16 is an internal format,
//not a pixel format, in any case.
GLfloat z;
glBindFramebuffer(GL_FRAMEBUFFER, _depthFramebuffer);
glReadPixels(screen.v[0], screen.v[1], 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z);
NSLog(@"z:%f", z);
*/

//read from the last rendered depth texture
//(note: glReadPixels reads from the currently bound framebuffer, not
//from the bound texture; the texture must be attached to the bound
//FBO for this to return its contents)
GLubyte rgb[4];
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _depthTexture);
glReadPixels(screen.v[0], screen.v[1], 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgb);
glBindTexture(GL_TEXTURE_2D, 0);
NSLog(@"r:%d g:%d b:%d [3]:%d", rgb[0], rgb[1], rgb[2], rgb[3]);

//winz is hard-coded to 1 (the far plane) here; it should be the
//depth value read back at the touch position
GLKVector4 in = GLKVector4Make(screen.v[0], screen.v[1], 1, 1.0);

/* Map x and y from window coordinates */
in.v[0] = (in.v[0] - viewport[0]) / viewport[2];
in.v[1] = (in.v[1] - viewport[1]) / viewport[3];

/* Map to range -1 to 1 */
in.v[0] = in.v[0] * 2.0 - 1.0;
in.v[1] = in.v[1] * 2.0 - 1.0;
in.v[2] = in.v[2] * 2.0 - 1.0;

GLKVector4 out = GLKMatrix4MultiplyVector4(projectionModelViewMatrix, in);
if(out.v[3]==0.0) {
    NSLog(@"out.v[3]==0.0");
    return GLKVector4Make(0.0, 0.0, 0.0, 0.0);
}
out.v[0] /= out.v[3];
out.v[1] /= out.v[3];
out.v[2] /= out.v[3];

return out;

}

It tries to read the data from either the depth renderbuffer or the depth texture, both of which are generated while drawing. I know this code is very inefficient, but it has to work first before I clean it up.

I tried drawing only into the extra framebuffer (the part commented out here) and only into the texture, without success.

Drawing

-(void)glkView:(GLKView *)view drawInRect:(CGRect)rect {

    glUseProgram(_program);

    //http://stackoverflow.com/questions/10761902/ios-glkit-and-back-to-default-framebuffer
    GLint defaultFBO;
    glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &defaultFBO);
    GLint defaultRBO;
    glGetIntegerv(GL_RENDERBUFFER_BINDING_OES, &defaultRBO);

    GLuint width, height;
    //width = height = 512;
    //note: on retina displays this should be multiplied by the view's
    //contentScaleFactor to match the drawable size
    width = self.view.frame.size.width;
    height = self.view.frame.size.height;


    //note: a new framebuffer (and texture below) is created on every
    //frame and never deleted; acceptable for testing, but it leaks
    GLuint framebuffer;
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    /* method offscreen framebuffer
    GLuint depthRenderbuffer;
    glGenRenderbuffers(1, &depthRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);
    */

    //method render to texture
    glActiveTexture(GL_TEXTURE1);
    //https://github.com/rmaz/Shadow-Mapping        
    //http://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithEAGLContexts/WorkingwithEAGLContexts.html
    GLuint depthTexture;
    glGenTextures(1, &depthTexture);
    glBindTexture(GL_TEXTURE_2D, depthTexture);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // we do not want to wrap, this will cause incorrect shadows to be rendered
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // set up the depth compare function to check the shadow depth
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_EXT, GL_LEQUAL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_EXT, GL_COMPARE_REF_TO_TEXTURE_EXT);

    //glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8_OES,  width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    //note: GL_DEPTH_COMPONENT textures require the OES_depth_texture extension on ES 2.0
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);


    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER) ;
    if(status != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"failed to make complete framebuffer object %x", status);
    }

    GLenum glError = glGetError();
    if(GL_NO_ERROR != glError) {
        NSLog(@"Offscreen OpenGL Error: %d", glError);
    }

    glClear(GL_DEPTH_BUFFER_BIT);
    //glCullFace(GL_FRONT);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glUniform1i(_uniforms.renderMode, 1);

    //        
    //Drawing calls
    //

    _depthTexture = depthTexture;
    //_depthFramebuffer = depthRenderbuffer;

    // Revert to the default framebuffer for now
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO);
    glBindRenderbuffer(GL_RENDERBUFFER, defaultRBO);


    // Render normally
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glClearColor(0.316f, 0.50f, 0.86f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    //glCullFace(GL_BACK);

    glUniform1i(_uniforms.renderMode, 0);
    [self update];

    //        
    //Drawing calls
    //        
}


Can the z value come back as different types? Or is it simply a floating-point number that has to be put back into a float data type?

Thanks for your support! patte

Edit 1

I now get RGBA values out of the texture I render to. For this I activate a separate framebuffer while drawing, without a depth attachment, and attach the texture to it. I edited the code above. Now I get the following values:

screen: [0]:604.00 [1]:348.00
r:102 g:102 b:102 [3]:255

screen: [0]:330.00 [1]:566.00
r:73 g:48 b:32 [3]:255

screen: [0]:330.00 [1]:156.00
r:182 g:182 b:182 [3]:255

screen: [0]:266.00 [1]:790.00
r:80 g:127 b:219 [3]:255

screen: [0]:548.00 [1]:748.00
r:80 g:127 b:219 [3]:255

As you can see, RGBA values are being read. The good news is that when I touch the sky, where there is no model any more, the value is always the same, and it varies when I touch the model, so I think the texture should be correct. But how do I now recombine these 4 bytes into the actual value that I can then pass to gluUnProject? I cannot simply cast them to a float.

0 answers:

No answers