I am currently developing a basic Oculus Rift application based on SDK 0.3.2 and OpenGL 3.2.0, to get myself used to 3D engines and the Oculus Rift technology.
For now I want to render a spinning cube inside the Oculus. For the distortion, I decided to use the distortion rendering provided in the SDK by the Oculus Rift team. So first I render the scene into a texture that contains two copies of the scene with different background colors so I can tell them apart (I am not bothering with the stereoscopic aspect for now), and I know that this part works:
The window is too big to be displayed entirely on the screen, but we can clearly see the same scene twice; the window is cut at its center.
EDIT 4 (FINAL):
After a lot of trial and error, and after following Jherico's advice, I managed to get everything working.
It turned out that, on top of what is described in EDIT 3, I had to rebind the vertex buffer every frame with glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);. The code (without stereoscopy or head tracking) can be found here.
Note: the MODE_OCULUS_DEBUG_ mode no longer works, and the image is flipped.
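For reference, a minimal sketch of where that per-frame rebinding sits, using the variable names from the snippets in this question (the complete working code is in the linked repository):
// Per-frame setup (sketch): bind the off-screen framebuffer and the VAO, and,
// as described above, rebind the vertex buffer before drawing.
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glBindVertexArray(vertexArrayObject);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);   // the rebinding that fixed the glitches
glUseProgram(shaderProgram);
// ... per-eye glViewport/glScissor, glClear and glDrawArrays(GL_TRIANGLES, 0, 36) ...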
EDIT 3:
I switched to SDK 0.4.1 and followed all of Jherico's advice (thanks a lot, by the way), and I ended up with this. I really don't understand what is going on, but one thing I noticed is that I sometimes have to bind the framebuffer with glBindBuffer and sometimes with glBindFramebuffer... Keep in mind that the texture is still the same as at the beginning, shown in the first screenshot.
I think this is a frame-timing problem. If I trigger the rendering of the first half-frame quickly enough, there is no glitch on the first frame. Note that each trigger renders only half a frame, which means I have to trigger twice to get one full frame. The second half is then always identical to the first, the third can be clean if I trigger it late enough, and then it is always glitched after the fourth frame. The glitch only appears once; even if I wait a long time, I cannot make it appear twice.
I will try to investigate this tomorrow, but if you have any idea, or if this is a common OpenGL mistake or misunderstanding, your help is welcome :)
You can find the code here.
EDIT 1:
I use a framebuffer to draw the scene into a texture. After the ovrHmd_BeginFrame(hmd, 0) statement, I bind the framebuffer for off-screen rendering.
Note that textures[] contains a grid and a star, blended by the fragment shader of shaderProgram:
glBindBuffer(GL_FRAMEBUFFER, frameBuffer);
glBindVertexArray(vertexArrayObject);
glEnable(GL_DEPTH_TEST);
glUseProgram(shaderProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textures[1]);
The framebuffer is created in the following function, which is called after init_ovr() but before init_render_ovr():
int init_framebuffers(){
// Framebuffers
//-----------------------------------------------
// In order to be displayed, it has to be "complete" (at least one color/depth/stencil buffer attached, one color attachment, same number of multisamples, all attachments complete)
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
// ---- Texture "Color Buffer"
glGenTextures(1, &renderedTex);
glBindTexture(GL_TEXTURE_2D, renderedTex);
glTexImage2D(renderedTex, 0, GL_RGB, renderTargetSize.w / 2, renderTargetSize.h, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Attaching the color buffer to the frame Buffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderedTex, 0);
// ---- RenderBuffer
// Render Buffer (to be able to render Depth calculation)
GLuint rboDepthStencil;
glGenRenderbuffers(1, &rboDepthStencil);
glBindRenderbuffer(GL_RENDERBUFFER, rboDepthStencil);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, renderTargetSize.w / 2, renderTargetSize.h);
// Attaching the render buffer to the framebuffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rboDepthStencil);
// Binding the Frame Buffer so the rendering happens in it
glBindFramebuffer( GL_FRAMEBUFFER, frameBuffer );
return 0;
}
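Since the comment at the top of this function notes that the framebuffer must be "complete", a minimal completeness check could be placed just before the return 0. This check is not part of the posted code; it is standard OpenGL 3.x, and fprintf assumes <cstdio> is included:
// Optional sanity check (not in the original code): verify the FBO is complete
// after attaching the color texture and the depth/stencil renderbuffer.
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    fprintf(stderr, "Framebuffer incomplete: 0x%x\n", status);
    return 1;
}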
EDIT 2: OK, so this is now my rendering loop:
ovrFrameTiming hdmFrameTiming = ovrHmd_BeginFrame(hmd,0);
for (int eyeIndex = 0; eyeIndex < ovrEye_Count; eyeIndex++){
ovrEyeType eye = hmdDesc.EyeRenderOrder[eyeIndex];
ovrPosef eyePose = ovrHmd_BeginEyeRender(hmd, eye);
// Clear the screen and the depth buffer (as it is filled with 0 initially,
// nothing will be drawn (0 = on top))
glClearColor(0.0f, 0.0f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_FRAMEBUFFER);
// Drawing in the FrameBuffer
glBindBuffer(GL_FRAMEBUFFER, frameBuffer);
glBindVertexArray(vertexArrayObject);
glEnable(GL_DEPTH_TEST);
glUseProgram(shaderProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textures[1]);
if (eye == ovrEye_Right){
glScissor(renderTargetSize.w / 2, 0, renderTargetSize.w / 2, renderTargetSize.h);
glViewport(renderTargetSize.w / 2, 0, renderTargetSize.w / 2, renderTargetSize.h);
}else{
glScissor(0, 0, renderTargetSize.w / 2, renderTargetSize.h);
glViewport(0, 0, renderTargetSize.w / 2, renderTargetSize.h);
}
if (eye == ovrEye_Right)
glClearColor(0.0f, 0.3f, 0.0f, 1.0f);
else
glClearColor(0.3f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//Turn around Z
trans = glm::rotate(
trans,
0.7f,
glm::vec3(0.0f, 0.0f, 1.0f)
);
glUniformMatrix4fv(uniTrans, 1, GL_FALSE, glm::value_ptr(trans));
// Drawing
glDrawArrays(GL_TRIANGLES, 0, 36);
// Unbind the custom frame Buffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
ovrHmd_EndEyeRender(hmd, eye, eyePose, &EyeTexture[eye].Texture);
}
ovrHmd_EndFrame(hmd);
And my rendering configuration:
EyeTexture[0].OGL.Header.API = ovrRenderAPI_OpenGL;
EyeTexture[0].OGL.Header.TextureSize = recommendedTex0Size;
EyeTexture[0].OGL.Header.RenderViewport.Size = recommendedTex0Size;
EyeTexture[0].OGL.Header.RenderViewport.Pos.x = 0;
EyeTexture[0].OGL.Header.RenderViewport.Pos.y = 0;
EyeTexture[0].OGL.TexId = renderedTex;
EyeTexture[1].OGL.Header.API = ovrRenderAPI_OpenGL;
EyeTexture[1].OGL.Header.TextureSize = recommendedTex1Size;
EyeTexture[1].OGL.Header.RenderViewport.Size = recommendedTex1Size;
EyeTexture[1].OGL.Header.RenderViewport.Pos.x = recommendedTex1Size.w;
EyeTexture[1].OGL.Header.RenderViewport.Pos.y = 0;
EyeTexture[1].OGL.TexId = renderedTex;
But I still cannot get anything to display.
ORIGINAL POST:
So now I want to barrel-distort this texture using the Oculus SDK. To do so, I initialize the Oculus engine:
int init_ovr(){
// Init the OVR library
ovr_Initialize();
// Create the software device and connect the physical device
hmd = ovrHmd_Create(0);
if (hmd)
ovrHmd_GetDesc( hmd, &hmdDesc );
else
return 1;
//Configuring the Texture size (bigger than screen for barrel distortion)
recommendedTex0Size = ovrHmd_GetFovTextureSize(hmd, ovrEye_Left, hmdDesc.DefaultEyeFov[0], 1.0f);
recommendedTex1Size = ovrHmd_GetFovTextureSize(hmd, ovrEye_Right, hmdDesc.DefaultEyeFov[1], 1.0f);
renderTargetSize.w = recommendedTex0Size.w + recommendedTex1Size.w;
renderTargetSize.h = std::max( recommendedTex0Size.h, recommendedTex1Size.h );
return 0;
}
and the rendering configuration:
int init_render_ovr(){
// Configure rendering with OpenGL
ovrGLConfig cfg;
cfg.OGL.Header.API = ovrRenderAPI_OpenGL;
cfg.OGL.Header.RTSize = OVR::Sizei( hmdDesc.Resolution.w, hmdDesc.Resolution.h );
cfg.OGL.Header.Multisample = 0;
cfg.OGL.Window = sdl_window_info.info.win.window;
ovrFovPort eyesFov[2];
// I also tried = { hmdDesc.DefaultEyeFov[0], hmdDesc.DefaultEyeFov[1] };
if ( !ovrHmd_ConfigureRendering(hmd, &cfg.Config, ovrDistortionCap_Chromatic | ovrDistortionCap_TimeWarp, eyesFov, eyesRenderDesc) )
return 1;
EyeTexture[0].OGL.Header.API = ovrRenderAPI_OpenGL;
EyeTexture[0].OGL.Header.TextureSize = recommendedTex0Size;
EyeTexture[0].OGL.Header.RenderViewport = eyesRenderDesc[0].DistortedViewport;
EyeTexture[0].OGL.TexId = renderedTex;
EyeTexture[1].OGL.Header.API = ovrRenderAPI_OpenGL;
EyeTexture[1].OGL.Header.TextureSize = recommendedTex1Size;
EyeTexture[1].OGL.Header.RenderViewport = eyesRenderDesc[1].DistortedViewport;
EyeTexture[1].OGL.TexId = renderedTex;
return 0;
}
Finally, the main rendering loop:
ovrFrameTiming hdmFrameTiming = ovrHmd_BeginFrame(hmd, 0);
for (int eyeIndex = 0; eyeIndex < ovrEye_Count; eyeIndex++){
ovrEyeType eye = hmdDesc.EyeRenderOrder[eyeIndex];
ovrPosef eyePose = ovrHmd_BeginEyeRender(hmd, eye);
if (eye == ovrEye_Right){
glScissor(renderTargetSize.w / 2, 0, renderTargetSize.w / 2, renderTargetSize.h);
glViewport(renderTargetSize.w / 2, 0, renderTargetSize.w / 2, renderTargetSize.h);
}else{
glScissor(0, 0, renderTargetSize.w / 2, renderTargetSize.h);
glViewport(0, 0, renderTargetSize.w / 2, renderTargetSize.h);
}
if (eye == ovrEye_Right)
glClearColor(0.0f, 0.3f, 0.0f, 1.0f);
else
glClearColor(0.3f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//Turn around Z
trans = glm::rotate(
trans,
0.7f,
glm::vec3(0.0f, 0.0f, 1.0f)
);
glUniformMatrix4fv(uniTrans, 1, GL_FALSE, glm::value_ptr(trans));
// Drawing
glDrawArrays(GL_TRIANGLES, 0, 36);
ovrHmd_EndEyeRender(hmd, eye, eyePose, &EyeTexture[eye].Texture);
}
ovrHmd_EndFrame(hmd);
But the result is just a black screen. I have tried moving the glViewport calls around, setting the EyeRenderDesc structures manually, and I read the documentation twice and followed it scrupulously... but nothing helps.
Am I forgetting something? I am out of ideas right now.
ANSWER (score: 4):
There are several problems with the code. First, you are setting up the EyeTexture structures incorrectly.
EyeTexture[0].OGL.Header.TextureSize = recommendedTex0Size;
EyeTexture[0].OGL.Header.RenderViewport = eyesRenderDesc[0].DistortedViewport;
The DistortedViewport value you are using here specifies the region of the physical screen onto which the distorted view of the rendered scene will be placed. The texture RenderViewport value, on the other hand, should be the region of the off-screen texture into which you rendered the scene. Since you almost always want to render to the full texture, the following is usually sufficient:
EyeTexture[0].OGL.Header.TextureSize = recommendedTex0Size;
EyeTexture[0].OGL.Header.RenderViewport.Size = recommendedTex0Size;
EyeTexture[0].OGL.Header.RenderViewport.Pos.x = 0;
EyeTexture[0].OGL.Header.RenderViewport.Pos.y = 0;
You are also setting the same texture ID in both places.
EyeTexture[0].OGL.TexId = renderedTex;
EyeTexture[1].OGL.TexId = renderedTex;
You can do that, but then you have to make the texture twice as wide and set each eye's RenderViewport values to cover its own half of the texture (a sketch follows below). Personally, I just use a different texture for each eye to avoid this.
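If you do keep a single shared texture, a rough sketch of what this describes might look as follows. This reuses renderTargetSize and renderedTex from the question and assumes renderedTex is actually allocated at the full renderTargetSize rather than half of it; it is only an illustration, not the configuration the answer recommends:
// Sketch only: one texture shared by both eyes. TextureSize describes the
// whole shared texture; each eye's RenderViewport covers its own half.
EyeTexture[0].OGL.Header.API = ovrRenderAPI_OpenGL;
EyeTexture[0].OGL.Header.TextureSize.w = renderTargetSize.w;
EyeTexture[0].OGL.Header.TextureSize.h = renderTargetSize.h;
EyeTexture[0].OGL.Header.RenderViewport.Pos.x = 0;
EyeTexture[0].OGL.Header.RenderViewport.Pos.y = 0;
EyeTexture[0].OGL.Header.RenderViewport.Size.w = renderTargetSize.w / 2;
EyeTexture[0].OGL.Header.RenderViewport.Size.h = renderTargetSize.h;
EyeTexture[0].OGL.TexId = renderedTex;
EyeTexture[1] = EyeTexture[0];                                           // same texture and size...
EyeTexture[1].OGL.Header.RenderViewport.Pos.x = renderTargetSize.w / 2;  // ...but the right half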
Next, in the code samples you have provided, you are not rendering to the off-screen buffer.
for (int eyeIndex = 0; eyeIndex < ovrEye_Count; eyeIndex++){
ovrEyeType eye = hmdDesc.EyeRenderOrder[eyeIndex];
ovrPosef eyePose = ovrHmd_BeginEyeRender(hmd, eye);
... lots of openGL viewport and draw calls ...
ovrHmd_EndEyeRender(hmd, eye, eyePose, &EyeTexture[eye].Texture);
}
However, for the Oculus SDK distortion to work, you cannot draw directly to the main framebuffer. You need to create an off-screen framebuffer targeting the textures identified in your EyeTexture array (the activate/deactivate steps are sketched concretely after the loop below):
for (int eyeIndex = 0; eyeIndex < ovrEye_Count; eyeIndex++){
ovrEyeType eye = hmdDesc.EyeRenderOrder[eyeIndex];
ovrPosef eyePose = ovrHmd_BeginEyeRender(hmd, eye);
... activate offscreen framebuffer ...
... lots of openGL viewport and draw calls ...
... deactivate offscreen framebuffer ...
ovrHmd_EndEyeRender(hmd, eye, eyePose, &EyeTexture[eye].Texture);
}
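As one concrete reading of the "activate / deactivate" placeholders, and assuming the frameBuffer object created in the question's init_framebuffers(), the minimal form would be:
// Activate the off-screen framebuffer so this eye is rendered into the
// texture the SDK will distort, then release it again afterwards.
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);   // activate offscreen framebuffer
// ... viewport, clear and draw calls for this eye ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);             // deactivate: back to the default framebuffer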
You can see a complete example of this here. That example has since been updated for 0.4.x, though, so you may want to go back through its history until you see ovrHmd_EndEyeRender (which has been removed in the latest SDK).
EDIT:
I have looked at the code you posted. It is incomplete, since you have not included the headers it depends on. Regardless, there are still some obvious problems:
// Clear the screen and the depth buffer (as it is filled with 0 initially,
// nothing will be drawn (0 = on top))
glClearColor(0.0f, 0.0f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_FRAMEBUFFER);
// Drawing in the FrameBuffer
glBindBuffer(GL_FRAMEBUFFER, frameBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, rboDepthStencil);
glEnable(GL_DEPTH_TEST);
Framebuffers are not bound with glBindBuffer; GL_FRAMEBUFFER is not a valid target for that call, and you need glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer) instead. GL_FRAMEBUFFER is not a valid bit for glClear either, and since the clear is issued before the off-screen framebuffer is bound, it clears whatever framebuffer is currently bound (the window) rather than your render texture. The renderbuffer also does not need to be re-bound every frame: it only has to be bound while you set it up and attach it (the glFramebufferRenderbuffer / glFramebufferTexture2D calls), after which the attachment stays with the framebuffer.
In init_render_ovr you fully set up EyeTexture[0], but you only set a few of the values in EyeTexture[1]. The EyeTexture[1] values do not default to those in EyeTexture[0]; you have to initialize every member.
Finally, until you have resolved your black-screen problem, your code should be littered with glGetError() calls. I have a GL_CHECK_ERROR macro that expands to something like this in debug builds:
GLenum errorCode = glGetError();
if (errorCode != 0) {
    throw error(errorCode);
}
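As an illustration only (this is not the actual macro from the answer; std::runtime_error stands in for whatever error type it really throws), such a macro could be written as:
#include <stdexcept>
#include <string>

// Debug-only GL error check: a no-op in release builds, throws on any GL error.
#ifdef NDEBUG
#define GL_CHECK_ERROR ((void)0)
#else
#define GL_CHECK_ERROR                                                \
    do {                                                              \
        GLenum errorCode = glGetError();                              \
        if (errorCode != 0) {                                         \
            throw std::runtime_error("OpenGL error " +                \
                                     std::to_string(errorCode));      \
        }                                                             \
    } while (0)
#endif
A GL_CHECK_ERROR; line can then be dropped after each block of GL calls while debugging, so the first failing call is pinpointed immediately.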