I'm attempting to write a very basic ray tracer from scratch in C. I'm having some trouble getting 2D screen coordinates from a 3D point in space.

I can cast camera rays easily and accurately enough to draw my models on screen. Now I'd like to compute a screen-space bounding box for a model so I can quickly skip regions of the image that are irrelevant to the render.

The results are closest to correct when the object is near the camera. When I pull the camera back (increasing cam->pos.z), my bounding box starts to drift away from the object.

I've seen a dozen different formulas here on Stack Overflow that I've been experimenting with and applying, but I haven't managed to get any of them working correctly. If anyone could look over what I'm doing and offer some advice or corrections, I'd really appreciate it.

My coordinate system is y-up.

The function I use to get the ray direction (it seems to be working, but I'm including it since it may be relevant):
#include <math.h>   /* tan, M_PI */

#define FOV 55 // placeholder - should move to camera attr

double deg2rad(double degrees)
{
    return degrees * M_PI / 180.0;
}

double rad2deg(double radians)
{
    return radians * 180.0 / M_PI;
}

void ray_fromPixel(Camera *camera, Resolution *res, Ray *ray, int x, int y) {
    ray->origin = camera->position;
    /* offset of the pixel from the screen centre, with y flipped so +y is up */
    ray->orientation.x = x - (res->width / 2);
    ray->orientation.y = (res->height / 2) - y;
    /* distance from the eye to the image plane, derived from the vertical FOV */
    ray->orientation.z = -(res->height / 2) / tan(deg2rad(FOV * 0.5));
    vector_Normalize(&ray->orientation);
}
The 3D-to-2D coordinate version (not working so well):
Coordinate coord_fromVector(Vector *v, Vector *cam_pos, int width, int height) {
    Coordinate screen;
    double hwidth = ((double)width) / 2;
    double hheight = ((double)height) / 2;

    /* point relative to the camera */
    double x_3D = v->x - cam_pos->x;
    double y_3D = v->y - cam_pos->y;
    double z_3D = v->z - cam_pos->z;

    /* divide by depth and map to pixel coordinates */
    screen.x = hwidth + (x_3D / z_3D) * hwidth;
    screen.y = hheight - (y_3D / z_3D) * hheight;

    /* aspect ratio is commented out as it's moving the box off screen */
    /*
    double aspect = screen.resolution.width / screen.resolution.height;
    if (aspect > 1.0) {
        screen.x /= aspect;
    } else {
        screen.y *= aspect;
    }
    */
    return screen;
}
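For reference, if I invert ray_fromPixel by hand I end up with something like the sketch below. It assumes the camera looks straight down -z with no rotation and reuses the FOV constant and deg2rad from above; coord_fromVector_inverse is just a throwaway name for this attempt, not code I'm confident in. Is this roughly the right idea?

/* Sketch: projection as the inverse of ray_fromPixel. Assumes the camera
 * looks down -z with no rotation; coord_fromVector_inverse is just a
 * throwaway name for this attempt. */
Coordinate coord_fromVector_inverse(Vector *v, Vector *cam_pos, int width, int height) {
    Coordinate screen;
    double hwidth  = ((double)width) / 2;
    double hheight = ((double)height) / 2;
    /* same image-plane distance that ray_fromPixel uses */
    double focal = hheight / tan(deg2rad(FOV * 0.5));

    /* point relative to the camera */
    double x_3D = v->x - cam_pos->x;
    double y_3D = v->y - cam_pos->y;
    double z_3D = v->z - cam_pos->z;   /* negative for points in front of the camera */

    /* divide by the distance along -z and scale by the focal length,
       rather than by the half-resolution */
    screen.x = hwidth + (x_3D / -z_3D) * focal;
    screen.y = hheight - (y_3D / -z_3D) * focal;
    return screen;
}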
Edit: trying it with some matrices instead...
void matrix_projection(double ***m, double fov,
                       double aspect, double znear, double zfar)
{
    /* OpenGL-style symmetric perspective matrix; fov is in degrees */
    double xymax = znear * tan(fov * M_PI / 360);   /* tan of half the FOV */
    double ymin = -xymax;
    double xmin = -xymax;
    double width = xymax - xmin;
    double height = xymax - ymin;
    double depth = zfar - znear;
    double q = -(zfar + znear) / depth;
    double qn = -2 * (zfar * znear) / depth;
    double w = 2 * znear / width;
    w = w / aspect;
    double h = 2 * znear / height;

    /* indices are [column][row], i.e. column-major as in OpenGL */
    (*m)[0][0] = w; (*m)[1][0] = 0; (*m)[2][0] = 0;  (*m)[3][0] = 0;
    (*m)[0][1] = 0; (*m)[1][1] = h; (*m)[2][1] = 0;  (*m)[3][1] = 0;
    (*m)[0][2] = 0; (*m)[1][2] = 0; (*m)[2][2] = q;  (*m)[3][2] = qn;
    (*m)[0][3] = 0; (*m)[1][3] = 0; (*m)[2][3] = -1; (*m)[3][3] = 0;
}
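To check my understanding of how this matrix is meant to be used: as far as I can tell it should be applied to a column vector (x, y, z, 1), the result divided by its w, and the resulting [-1, 1] NDC values then mapped to pixels. Below is a small sketch of that, done directly rather than through matrix_multiply. project_point is just a hypothetical helper for the sketch; it assumes the [column][row] layout used in matrix_projection above.

/* Sketch: apply the projection matrix to one point, divide by w, then map
 * NDC in [-1, 1] to pixel coordinates. Assumes m uses [column][row] layout,
 * matching the assignments in matrix_projection. */
Coordinate project_point(double **m, Vector *p, int width, int height)
{
    double in[4] = { p->x, p->y, p->z, 1.0 };
    double out[4] = { 0.0, 0.0, 0.0, 0.0 };
    int row, col;

    for (row = 0; row < 4; row++)
        for (col = 0; col < 4; col++)
            out[row] += m[col][row] * in[col];   /* column-major access */

    Coordinate screen;
    screen.x = (out[0] / out[3] + 1.0) * 0.5 * width;           /* NDC x -> pixels */
    screen.y = (1.0 - (out[1] / out[3] + 1.0) * 0.5) * height;  /* flip y for screen space */
    return screen;
}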
//projection_matrix * modelview_matrix * world_coordinates
/* multiplies m1 (r1 x c1) by m2 (c1 x c2); result (r1 x c2) must be
   zero-initialised, since the products are accumulated into it */
void matrix_multiply(double ***result, double ***m1, int r1,
                     int c1, double ***m2, int c2)
{
    int i, j, k;
    for (i = 0; i < r1; ++i)
        for (j = 0; j < c2; ++j)
            for (k = 0; k < c1; ++k)
            {
                (*result)[i][j] += ((*m1)[i][k]) * ((*m2)[k][j]);
            }
}
/* allocates a rows x cols matrix and zero-initialises every entry */
void matrix_allocate(double ***matrix, int rows, int cols)
{
    int x, y;
    *matrix = (double **) malloc(sizeof(double *) * rows);
    for (x = 0; x < rows; x++) {
        (*matrix)[x] = malloc(sizeof(double) * cols);
        for (y = 0; y < cols; y++) {
            (*matrix)[x][y] = 0.0;
        }
    }
}
Coordinate coord_fromVector(Vector *v, Vector *cam_pos, int width, int height) {
    Coordinate screen;
    double hwidth = ((double)width) / 2;
    double hheight = ((double)height) / 2;
    double aspect = hwidth / hheight;

    double **proj;
    matrix_allocate(&proj, 4, 4);
    double **model;
    matrix_allocate(&model, 4, 4);

    /* homogeneous point (x, y, z, 1) */
    double **vmatrix;
    matrix_allocate(&vmatrix, 1, 4);
    vmatrix[0][0] = v->x;
    vmatrix[0][1] = v->y;
    vmatrix[0][2] = v->z;
    vmatrix[0][3] = 1;

    matrix_projection(&proj, 55, aspect, 1.0, 2000.0);

    /* identity modelview with the camera position in the translation slots */
    model[0][0] = model[1][1] = model[2][2] = model[3][3] = 1;
    model[3][0] = cam_pos->x;
    model[3][1] = cam_pos->y;
    model[3][2] = cam_pos->z;

    double **proj_model;
    matrix_allocate(&proj_model, 4, 4);
    matrix_multiply(&proj_model, &proj, 4, 4, &model, 4);

    double **proj_model_v;
    matrix_allocate(&proj_model_v, 1, 4);
    matrix_multiply(&proj_model_v, &proj_model, 4, 4, &vmatrix, 1);

    /* perspective divide */
    screen.x = proj_model_v[0][0] / proj_model_v[0][3];
    screen.y = proj_model_v[0][1] / proj_model_v[0][3];
    return screen;
}
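In case it helps, this is the little sanity check I've been running against either version of coord_fromVector. It assumes Vector is a plain struct of three doubles {x, y, z} so the brace initialisers work, and a 640x480 target: a point straight ahead of a camera at the origin should land near the centre of the image, roughly (320, 240).

#include <stdio.h>

/* Sanity check: a point 10 units straight ahead of a camera at the origin
 * should project close to the centre of a 640x480 image. Assumes Vector is
 * a plain struct of three doubles {x, y, z}. */
int main(void)
{
    Vector cam_pos = { 0.0, 0.0, 0.0 };
    Vector point   = { 0.0, 0.0, -10.0 };  /* camera looks down -z */

    Coordinate c = coord_fromVector(&point, &cam_pos, 640, 480);
    printf("screen: %f, %f (expected about 320, 240)\n", c.x, c.y);
    return 0;
}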
Sadly, still a long way to go at this stage.