I'm trying to build a perspective projection matrix in Python for use with pyOpenGL. My view and model transformations work, but when I apply my projection transformation I get a blank screen (I should see a triangle at the origin, viewed from (0, 0, +1)).
I've gone over the math and, as far as I can tell, the transformation should work, so I need a second pair of eyes to help find the problem.
import math
import numpy

def perspective(field_of_view_y, aspect, z_near, z_far):
    fov_radians = math.radians(field_of_view_y)
    f = math.tan(fov_radians / 2)
    a_11 = 1 / (f * aspect)
    a_22 = 1 / f
    a_33 = (z_near + z_far) / (z_near - z_far)
    a_34 = -2 * z_near * z_far / (z_near - z_far)
    # a_33 = -(z_far + z_near)/(z_far - z_near)
    # a_34 = 2*z_far*z_near/(z_far - z_near)
    perspective_matrix = numpy.matrix([
        [a_11, 0, 0, 0],
        [0, a_22, 0, 0],
        [0, 0, a_33, a_34],
        [0, 0, -1, 0]
    ]).T
    return perspective_matrix
projection_matrix = perspective(45, 600/480, 0.1, 100)
mvp_matrix = projection_matrix * view_matrix * model_matrix
I'm transposing the matrix because I'm fairly sure numpy stores matrices the opposite way to how OpenGL expects them. I have also tried sending the matrix without transposing it, and it makes no (visible) difference to the output.
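As an aside, the row-major/column-major relationship can be checked directly in numpy, independent of OpenGL. This is a minimal sketch (not the matrices from the question): OpenGL's column-major memory layout corresponds to flattening a default numpy array in Fortran order, or equivalently transposing first and flattening in the default (C) order:

```python
import numpy

# A 4x4 matrix in numpy's default row-major layout, written as it
# appears in math notation.
m = numpy.arange(16).reshape(4, 4)

# Column-major (OpenGL-style) memory order, two equivalent ways:
col_major_a = m.T.flatten()           # transpose, then C-order flatten
col_major_b = m.flatten(order='F')    # flatten directly in Fortran order

# Both produce the same buffer: columns laid out one after another.
print(numpy.array_equal(col_major_a, col_major_b))
```

This is why a single transpose before upload is enough, provided every matrix in the product uses the same convention to begin with.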
Here is the vertex shader:
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 MVP;
void main()
{
    vec4 p = vec4(position, 1.0);
    gl_Position = MVP * p;
}
Can anyone identify the problem with my transformations?
EDIT: I have taken the output matrix and done the calculations by hand. After applying the perspective division, all points on the frustum edges land on the NDC box, with z at the near and far planes mapping to -1 and +1 respectively (give or take minor precision loss due to rounding errors). To me this suggests my math is correct and the problem lies elsewhere. Here is the output matrix:
[ 1.93137085 0. 0. 0. ]
[ 0. 2.41421356 0. 0. ]
[ 0. 0. -1.002002 -1. ]
[ 0. 0. 0.2002002 0. ]
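For comparison, the near/far mapping can be checked numerically against the textbook (glm-style) perspective matrix. This is a sketch using the same assumed parameters (fovy 45, aspect 600/480, planes 0.1 and 100), not the matrix printed above:

```python
import math
import numpy

def glm_style_perspective(fovy_deg, aspect, z_near, z_far):
    # Standard OpenGL perspective matrix, written in row-major math notation.
    f = math.tan(math.radians(fovy_deg) / 2)
    m = numpy.zeros((4, 4))
    m[0, 0] = 1 / (aspect * f)
    m[1, 1] = 1 / f
    m[2, 2] = -(z_far + z_near) / (z_far - z_near)
    m[2, 3] = -2 * z_far * z_near / (z_far - z_near)
    m[3, 2] = -1
    return m

p = glm_style_perspective(45, 600 / 480, 0.1, 100)

# Eye-space points on the near and far planes (camera looks down -z).
near_pt = p @ numpy.array([0, 0, -0.1, 1])
far_pt = p @ numpy.array([0, 0, -100, 1])

# After the perspective divide, z should be close to -1 and +1.
print(near_pt[2] / near_pt[3])
print(far_pt[2] / far_pt[3])
```

If a hand-built matrix passes the same near/far check but still renders nothing, the problem is usually upstream of the projection, e.g. in how the matrices are combined or uploaded.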
Answer 0 (score: 3)
Since you say you are using glm::perspective, let's analyze your code against it. There is one key inconsistency:
glm::perspective
assert(aspect != valType(0));
assert(zFar != zNear);
#ifdef GLM_FORCE_RADIANS
valType const rad = fovy;
#else
valType const rad = glm::radians(fovy);
#endif
valType tanHalfFovy = tan(rad / valType(2));
detail::tmat4x4<valType> Result(valType(0));
Result[0][0] = valType(1) / (aspect * tanHalfFovy);
Result[1][1] = valType(1) / (tanHalfFovy);
Result[2][2] = - (zFar + zNear) / (zFar - zNear);
Result[2][3] = - valType(1);
Result[3][2] = - (valType(2) * zFar * zNear) / (zFar - zNear);
return Result;
Note the following line:
Result[2][2] = - (zFar + zNear) / (zFar - zNear);
Compare it with your equivalent:
a_33 = (z_near + z_far)/(z_near - z_far)
Notice that the entire expression is preceded by a negative sign (-). Your version does not have this.
Answer 1 (score: 0)
I have figured out the problem, posting for info in case anyone runs into similar problems in future.
While building up the model, view and projection matrices, I introduced a mix of row-major and column-major matrices. The mix arose because numpy and OpenGL expect matrices in different memory layouts. Each matrix worked when applied on its own, because it could easily be transposed with numpy to produce the correct result.
The problem occurred when combining the matrices. The effect was that the transformations were applied in an inconsistent, meaningless order and all points were drawn off screen. This hid errors in the perspective matrix and complicated debugging.
The solution is to ensure all matrices store their data consistently (either all row-major or all column-major), and to transpose exactly once, just before sending the combined matrix to OpenGL.
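A minimal numpy sketch of why this matters (using arbitrary random matrices as stand-ins for the projection, view and model matrices, not the actual values from the question): transposition reverses the order of a product, so transposing each factor individually while keeping the original multiplication order yields a different matrix than composing first and transposing once.

```python
import numpy

rng = numpy.random.default_rng(0)
a = rng.random((4, 4))  # stand-in for the projection matrix
b = rng.random((4, 4))  # stand-in for the view matrix
c = rng.random((4, 4))  # stand-in for the model matrix

# Consistent approach: compose in one convention, transpose once at the end.
composed_then_transposed = (a @ b @ c).T

# (A @ B @ C).T equals C.T @ B.T @ A.T -- the factors in reverse order...
print(numpy.allclose(composed_then_transposed, c.T @ b.T @ a.T))

# ...so transposing each factor but keeping the original order produces a
# different (wrong) matrix, which is what mixing conventions amounts to.
print(numpy.allclose(composed_then_transposed, a.T @ b.T @ c.T))
```

The first check holds and the second fails, which is exactly the inconsistency described above: each matrix looks fine in isolation, but the composed transform is meaningless.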