I'm rendering a 3D model with OpenGL ES 2.0, using Assimp to load it. I'm currently running into a strange problem where some parts of the model are invisible, even though they should be visible. It's easy to see in these pictures:
In the second image I rendered the z-buffer (a linearized version) to the screen to check whether it might be a z-buffer problem. There are black pixels near the camera:
I tried changing the z-near and z-far values, with no effect. Right now I do this at initialization:
glEnable(GL_CULL_FACE);  // cull back-facing polygons
glEnable(GL_DEPTH_TEST);
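For reference, the full depth state with the defaults spelled out looks like this (glDepthFunc and glDepthMask should already be at these values unless something else in the code changes them, but it's worth ruling out):

// Depth-state sanity check: GL_LESS and enabled depth writes are the GL
// defaults, but an earlier glDepthMask(GL_FALSE) would break the depth test.
glEnable(GL_CULL_FACE);   // cull back-facing polygons
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);     // default: keep the fragment closest to the camera
glDepthMask(GL_TRUE);     // default: allow writes to the depth buffer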
I also do this every frame:
glClearColor(0.7f, 0.7f, 0.7f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
I thought it might be a face-winding problem, so I tried disabling GL_CULL_FACE, but that didn't help. I'm fairly sure the model itself is fine, since Blender renders it correctly.
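A quick way to rule winding in or out is to make the culling state explicit and flip the front-face direction (GL_CCW and GL_BACK are the GL defaults):

// Make the culling state explicit (GL_CCW and GL_BACK are the defaults).
glFrontFace(GL_CCW);  // counter-clockwise triangles are front faces
glCullFace(GL_BACK);  // discard back faces
// If switching to glFrontFace(GL_CW) fills in the missing parts,
// the model's triangles are wound clockwise.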
These are the shaders I'm using right now:
// vertex shader
uniform mat4 u_ModelMatrix; // A constant representing the model matrix.
uniform mat4 u_ViewMatrix; // A constant representing the view matrix.
uniform mat4 u_ProjectionMatrix; // A constant representing the projection matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.
void main()
{
// Transform the vertex into eye space.
mat4 modelViewMatrix = u_ViewMatrix * u_ModelMatrix; // a local, not a uniform
v_Position = vec3(modelViewMatrix * a_Position);
// Pass through the texture coordinate.
v_TexCoordinate = a_TexCoordinate;
// Transform the normal's orientation into eye space. This is only correct
// while the model-view matrix contains no non-uniform scaling; otherwise a
// normal matrix is needed (see the sketch after this shader).
v_Normal = vec3(modelViewMatrix * vec4(a_Normal, 0.0));
// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrices to get the final position in clip space.
gl_Position = u_ProjectionMatrix * modelViewMatrix * a_Position;
}
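As the comment in the shader notes, transforming normals with the model-view matrix breaks down under non-uniform scaling. A minimal sketch of the usual fix, computed on the CPU and assuming GLM is available in the native code (viewMatrix, modelMatrix, normalMatrixHandle and u_NormalMatrix are placeholder names, not part of my code above):

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Normal matrix = transpose(inverse(model-view)). Upload it to a
// hypothetical mat3 uniform u_NormalMatrix and use it on a_Normal
// in place of the model-view matrix.
glm::mat4 modelView = viewMatrix * modelMatrix;
glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(modelView)));
glUniformMatrix3fv(normalMatrixHandle, 1, GL_FALSE, glm::value_ptr(normalMatrix));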
And this is the fragment shader:
// fragment shader
precision mediump float; // ES 2.0 requires a default float precision in fragment shaders
uniform sampler2D u_Texture; // The input texture.
uniform int u_TexCount;
varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.
// The entry point for our fragment shader.
void main()
{
vec3 lightPos = vec3(1.0); // hard-coded light position in eye space (a local, not a uniform)
// Will be used for attenuation.
float distance = length(lightPos - v_Position);
// Get a lighting direction vector from the fragment to the light.
vec3 lightVector = normalize(lightPos - v_Position);
// Calculate the dot product of the light vector and the normal
// (re-normalized, since interpolation shortens it). If they point in
// the same direction the fragment gets maximum illumination.
float diffuse = max(dot(normalize(v_Normal), lightVector), 0.0);
// Add attenuation.
diffuse = diffuse * (1.0 / distance);
// Add ambient lighting.
diffuse = diffuse + 0.2;
diffuse = 1.0; // debug: override the lighting while inspecting the z-buffer
//gl_FragColor = diffuse * texture2D(u_Texture, v_TexCoordinate); // textured version
// Linearize the depth value for display, assuming z-near = 0.1 and z-far = 100.0.
float d = (2.0 * 0.1) / (100.0 + 0.1 - gl_FragCoord.z * (100.0 - 0.1));
gl_FragColor = vec4(d, d, d, 1.0); // z-buffer render
}
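For what it's worth, with z-near = 0.1 and z-far = 100.0 that last formula maps gl_FragCoord.z = 0.0 to 0.2 / 100.1 ≈ 0.002 and 1.0 to exactly 1.0, so geometry very close to the camera comes out almost black in this visualization by construction.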
I load the geometry and related data into indexed VBOs.
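The draw call itself is the standard indexed path, roughly like this (vertexBufferId, indexBufferId and indexCount are placeholders for handles from my setup code):

// Bind the vertex and index buffers, then draw the indexed triangles.
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferId);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);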
Of course I can paste any other code you think might be relevant, but for now I would be happy to hear some ideas about what could cause this strange behavior, or some tests I could run.
Answer 0 (score: 2)
OK, I solved the problem. I'm posting the solution because it may be useful to future googlers.
Basically, I was never requesting a depth buffer. I do all the rendering work in native code, but all of the OpenGL context initialization is done on the Java side. I used one of the Android samples (GL2JNIActivity) as a starting point, but it doesn't request a depth buffer, and I hadn't noticed.
I fixed the problem by setting the depth-buffer size to 24 when setting up the ConfigChooser:
setEGLConfigChooser(translucent ?
        new ConfigChooser(8, 8, 8, 8, 24 /*depth*/, 0) :
        new ConfigChooser(5, 6, 5, 0, 24 /*depth*/, 0));
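One caveat: not every device exposes an EGL config with a 24-bit depth buffer, so if the custom chooser fails to find a match on some hardware, GLSurfaceView's built-in setEGLConfigChooser(8, 8, 8, 8, 16, 0) with a 16-bit depth buffer is a more portable fallback.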