Rasterization algorithm: finding the "ST" coordinates of a point in a 2D quad and back-projection

Date: 2015-01-18 17:48:45

Tags: c++ 3d rendering rasterizing

My goal is to render an image of a quad using a rasterization algorithm. This is how far I have got:

  • Create the quad in 3D
  • Project the quad's vertices onto the screen using a perspective divide
  • Convert the resulting coordinates from screen space to raster space and compute the bounding box of the quad in raster space
  • Loop over all pixels contained in this bounding box and find out whether the current pixel P lies inside the quad. For this I use a simple test which consists of taking the dot product between an edge AB of the quad and the vector going from vertex A to point P. I repeat this for all four edges, and if the sign is the same every time, the point is inside the quad.

I have managed to implement this successfully (see the code below). But I am stuck on the rest of what I want to do, which is actually finding the st or texture coordinates of my quad.

  • I don't know whether it is possible to find the st coordinates of the current pixel P inside the quad in raster space and then convert them back to world space. Could someone point me in the right direction and tell me how to do this?
  • Also, how would I compute the z or depth value of the pixels contained in the quad? I assume this has to do with finding the st coordinates of the point in the quad and then interpolating the z values of the vertices? (Rough sketches of both ideas follow below.)
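The interpolation hinted at in the second bullet is commonly done by splitting the quad into two triangles and computing barycentric coordinates of the pixel in raster space. The following stand-alone sketch is not part of the question's code: the Vec2 struct, the edgeFunction helper, the st values attached to the corners and all numeric values are assumptions made purely for illustration. It shows how 1/z and st/z can be interpolated linearly with those weights and then divided through to obtain a perspective-correct depth and st for the pixel.

    #include <cstdio>

    struct Vec2 { float x, y; };

    // signed double area of triangle (a, b, c); the sign depends on the winding
    float edgeFunction(const Vec2 &a, const Vec2 &b, const Vec2 &c)
    {
        return (c.x - a.x) * (b.y - a.y) - (c.y - a.y) * (b.x - a.x);
    }

    int main()
    {
        // raster-space positions of one triangle of the quad (example values)
        Vec2 v0 = {10, 10}, v1 = {200, 30}, v2 = {180, 220};
        // camera-space depths of the three vertices (example values)
        float z0 = 2.0f, z1 = 4.0f, z2 = 3.0f;
        // st coordinates attached to the three vertices (example assignment)
        Vec2 st0 = {0, 0}, st1 = {1, 0}, st2 = {1, 1};

        Vec2 p = {120.5f, 90.5f}; // centre of the pixel being tested

        float area = edgeFunction(v0, v1, v2);
        // barycentric weights: area of each sub-triangle over the full area
        float w0 = edgeFunction(v1, v2, p) / area;
        float w1 = edgeFunction(v2, v0, p) / area;
        float w2 = edgeFunction(v0, v1, p) / area;

        if (w0 >= 0 && w1 >= 0 && w2 >= 0) { // pixel centre is inside the triangle
            // depth: interpolate 1/z linearly in raster space, then invert
            float z = 1 / (w0 / z0 + w1 / z1 + w2 / z2);
            // perspective-correct st: interpolate st/z linearly, then multiply by z
            float s = (w0 * st0.x / z0 + w1 * st1.x / z1 + w2 * st2.x / z2) * z;
            float t = (w0 * st0.y / z0 + w1 * st1.y / z1 + w2 * st2.y / z2) * z;
            printf("z = %f, st = (%f, %f)\n", z, s, t);
        }
        return 0;
    }

The same weights could be applied to the world-space vertex positions (interpolating position/z and multiplying by the interpolated z, exactly as for st) to recover the world-space point under the pixel, which is one way to think about the back-projection mentioned in the question.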

PS: this is not homework. I am doing this to understand the rasterization algorithm, and this is exactly the point where I am stuck. I believe the GPU rendering pipeline involves some sort of back-projection, but I am simply lost here. Thanks for your help.

    Vec3f verts[4]; // vertices of the quad in world space
    Vec2f vraster[4]; // vertices of the quad in raster space
    uint8_t outside = 0; // is the quad in raster space visible at all?
    Vec2i bmin(10e8), bmax(-10e8);
    for (uint32_t j = 0; j < 4; ++j) {
        // transform unit quad to world position by transforming each
        // one of its vertices by a transformation matrix (represented
        // here by 3 unit vectors and a translation value)
        verts[j].x = quads[j].x * right.x + quads[j].y * up.x + quads[j].z * forward.x + pt[i].x;
        verts[j].y = quads[j].x * right.y + quads[j].y * up.y + quads[j].z * forward.y + pt[i].y;
        verts[j].z = quads[j].x * right.z + quads[j].y * up.z + quads[j].z * forward.z + pt[i].z;

        // project the vertices on the image plane (perspective divide)
        verts[j].x /= -verts[j].z;
        verts[j].y /= -verts[j].z;

        // assume the image plane is 1 unit away from the eye
        // and fov = 90 degrees, thus bottom-left and top-right
        // coordinates of the screen are (-1,-1) and (1,1) respectively.
        if (fabs(verts[j].x) > 1 || fabs(verts[j].y) > 1) outside |= (1 << j);

        // convert image plane coordinates to raster space
        // (the image is assumed to be square, so width is used for both axes)
        vraster[j].x = (int32_t)((verts[j].x + 1) * 0.5 * width);
        vraster[j].y = (int32_t)((1 - (verts[j].y + 1) * 0.5) * width);


        // compute box of the quad in raster space
        if (vraster[j].x < bmin.x) bmin.x = (int)std::floor(vraster[j].x);
        if (vraster[j].y < bmin.y) bmin.y = (int)std::floor(vraster[j].y);
        if (vraster[j].x > bmax.x) bmax.x = (int)std::ceil(vraster[j].x);
        if (vraster[j].y > bmax.y) bmax.y = (int)std::ceil(vraster[j].y);
    }

    // cull if all vertices are outside the canvas boundaries
    if (outside == 0x0F) continue;

    // precompute edge of quad
    Vec2f edges[4];
    for (uint32_t j = 0; j < 4; ++j) {
        edges[j] = vraster[(j + 1) % 4] - vraster[j];
    }

    // loop over all pixels contained in box
    for (int32_t y = std::max(0, bmin.y); y <= std::min((int32_t)(width -1), bmax.y); ++y) {
        for (int32_t x = std::max(0, bmin.x); x <= std::min((int32_t)(width -1), bmax.x); ++x) {
            bool inside = true;
            for (uint32_t j = 0; j < 4 && inside; ++j) {
                Vec2f v = Vec2f(x + 0.5, y + 0.5) - vraster[j];
                float d = edges[j].x * v.x + edges[j].y * v.y;
                inside &= (d > 0);
            }
            // pixel is inside quad, mark in the image
            if (inside) {
                buffer[y * width + x] = 255;
            }
        }
    }
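
On the first bullet (st coordinates of P directly in the 2D quad): one possible approach, sketched below with none of the names or values taken from the question, is to invert the bilinear mapping that places st = (0,0), (1,0), (1,1), (0,1) at the four raster-space corners v0..v3. Solving the mapping for t gives a quadratic equation; s then follows from a division. The st obtained this way is the plain 2D result, without the perspective correction shown in the earlier sketch.

    #include <cmath>
    #include <cstdio>

    struct Vec2 { float x, y; };

    static Vec2 sub(const Vec2 &a, const Vec2 &b) { return {a.x - b.x, a.y - b.y}; }
    static float cross2(const Vec2 &a, const Vec2 &b) { return a.x * b.y - a.y * b.x; }

    // Invert p = (1-s)(1-t)*v0 + s(1-t)*v1 + s*t*v2 + (1-s)*t*v3.
    // Returns false if p has no solution inside the unit square [0,1]^2.
    bool invBilinear(const Vec2 &p, const Vec2 &v0, const Vec2 &v1,
                     const Vec2 &v2, const Vec2 &v3, float &s, float &t)
    {
        Vec2 e = sub(v1, v0);                    // edge along s
        Vec2 f = sub(v3, v0);                    // edge along t
        Vec2 g = {v0.x - v1.x + v2.x - v3.x,     // "twist" term of the mapping
                  v0.y - v1.y + v2.y - v3.y};
        Vec2 h = sub(p, v0);

        // coefficients of the quadratic k2*t^2 + k1*t + k0 = 0
        float k2 = cross2(g, f);
        float k1 = cross2(e, f) + cross2(h, g);
        float k0 = cross2(h, e);

        if (std::fabs(k2) < 1e-6f) {             // parallelogram: linear case
            if (std::fabs(k1) < 1e-6f) return false;
            t = -k0 / k1;
        } else {
            float disc = k1 * k1 - 4 * k2 * k0;
            if (disc < 0) return false;
            float r = std::sqrt(disc);
            t = (-k1 - r) / (2 * k2);
            if (t < 0 || t > 1) t = (-k1 + r) / (2 * k2);
        }
        // recover s from h = s*e + t*f + s*t*g, using the larger denominator
        float dx = e.x + g.x * t, dy = e.y + g.y * t;
        s = std::fabs(dx) > std::fabs(dy) ? (h.x - f.x * t) / dx
                                          : (h.y - f.y * t) / dy;
        return s >= 0 && s <= 1 && t >= 0 && t <= 1;
    }

    int main()
    {
        // raster-space corners of a quad (example values)
        Vec2 v0 = {10, 10}, v1 = {200, 30}, v2 = {180, 220}, v3 = {20, 200};
        Vec2 p = {120.5f, 90.5f};                // a pixel centre found inside the quad
        float s, t;
        if (invBilinear(p, v0, v1, v2, v3, s, t))
            printf("st = (%f, %f)\n", s, t);
        return 0;
    }

In the pixel loop above, such a function could be called with the pixel centre (x + 0.5, y + 0.5) and the four vraster vertices for every pixel that passes the inside test, assuming the vertices are stored in a consistent winding order around the quad.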
