C#: How to convert a 3D X, Y and Z position to a 2D X and Y position?

Asked: 2015-10-04 14:11:26

Tags: vb.net math 3d 2d projection

What I'd like is some kind of function that converts a 3D position into a 2D position, for example:

Private Function Get2DPoint(ByRef x As Short, ByRef y As Short, ByRef z As Short)

    Dim newX = x + z '< Some fancy math
    Dim newY = y + z '< Some fancy math

    Dim temp = {newX, newY}
    Return temp

End Function

I have checked online resources, but I have a hard time understanding them (I haven't copied the information from those pages here, since they have been up and running for a long time and there is a lot of information on said pages):

Please don't tell me that I should use a pre-existing library. I have read plenty of questions on the matter, and using something like OpenGL or another library is not what I want to do.

I have been searching for quite a while and I really can't work out how to do this, so any and all help would be greatly appreciated.

If I have forgotten to provide any information, please let me know.

Notes:

  1. I am programming in Visual Basic using Visual Studio 2015, but it is fine if any code examples given are in C++, C#, Python, Lua, or another similar programming language.

  2. I would have liked to post more links, but I don't have the required 10+ reputation.

  3. All the best, Joseph Foote

1 Answer:

Answer 0 (score: 0)

As shown in your linked example, you need to define a class for your cube, a class for a point in space and a class for your camera, plus a rectangle to draw in, which already exists as a class (System.Drawing.Rectangle):

Class Diagram

I won't convert the code from C# to VB.net, since that can be done easily and it's your job :) but I will explain how it works.
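For orientation, here is a rough sketch of what the supporting types might look like. The member names are assumptions based on how they are used further down (Math3D.Point3D, camera1.Position and so on); the linked example's real classes contain more than this:

//Hypothetical skeletons - the example's actual classes have more members and methods.
public static class Math3D
{
    public class Point3D
    {
        public double X, Y, Z;
        public Point3D(double x, double y, double z) { X = x; Y = y; Z = z; }
    }

    //Static helpers used later in the walkthrough: Translate, RotateX, RotateY, RotateZ.
}

public class Camera
{
    //Only the position is needed here; the projection code assumes the camera looks down the Z axis.
    public Math3D.Point3D Position;
}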

First, you need to initialize the cube you want to draw. Then you need to define where the origin of the 3D space sits on the PictureBox; in your case, let's say you want the middle of the cube - the (0, 0, 0) point - to end up in the middle of the PictureBox:

//Point is also a System.Drawing type.
Point origin = new Point(picCube.Width / 2, picCube.Height / 2);

Now all you need to do is render that picture whenever you want. In this example, the rendering itself is done inside the Cube class, which receives the origin we just calculated; also, in the example the up vector is always the Y axis.
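As a rough idea of how that might be wired up (picCube as the PictureBox and the Cube constructor below are assumptions for illustration, not the example's exact API):

//Hypothetical call site - re-render whenever the rotation or the size changes.
Point origin = new Point(picCube.Width / 2, picCube.Height / 2);
Cube cube = new Cube(100, 100, 100); //width, height, depth (assumed constructor)
picCube.Image = cube.drawCube(origin);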

First, the method is defined with the following signature:

//Gets the origin on the PictureBox to be displayed (middle of the PictureBox).
//Returns the rendered picture.
public Bitmap drawCube(Point drawOrigin)

Next, 3 variables are declared:

  • A temporary 3D origin.
  • A temporary 2D origin.
  • An array of 24 2D points to be drawn (the cube is drawn as quadrilaterals, one per face, so each edge ends up drawn twice - once per adjacent face - which is a bad practice in this example).

Here's the code:

PointF[] point3D = new PointF[24]; //Will be actual 2D drawing points
Point tmpOrigin = new Point(0, 0);
Math3D.Point3D point0 = new Math3D.Point3D(0, 0, 0); //Used for reference

Then the camera's Z position is defined relative to the screen resolution, so that the cube doesn't get messed up:

//Screen is a System.Windows.Forms class.
//Called "zoom" in the example.
double baseCameraZ = Screen.PrimaryScreen.Bounds.Width / 1.5;

Next, the cube's points in space are calculated from its width, height and depth (again, there are 24 of them rather than 8, because they are drawn per face), and the cameraZ position is adjusted accordingly so that the cube fits:

//Just filling a 24 length array of Point3D, you can see in the example their exact order.
//note that the order matters mostly so each face's vertexes will be together in the array - one after another.
Math3D.Point3D[] cubePoints = fillCubeVertices(width, height, depth);

//Calculate the camera Z position to stay constant despite rotation
Math3D.Point3D anchorPoint = (Math3D.Point3D)cubePoints[4]; //anchor point
double cameraZ = -(((anchorPoint.X - cubeOrigin.X) * baseCameraZ) / cubeOrigin.X) + anchorPoint.Z;

//That's the actual camera of the cube - read the example itself for more info.
camera1.Position = new Math3D.Point3D(cubeOrigin.X, cubeOrigin.Y, cameraZ);

The next functions apply transformations over the points using matrices - you don't need to understand exactly how they work, although you may want to look into it later. They basically apply the cube's rotation and position it at a fixed place in 3D space relative to the origin:

//Apply Rotations, moving the cube to a corner then back to middle
cubePoints = Math3D.Translate(cubePoints, cubeOrigin, point0);
cubePoints = Math3D.RotateX(cubePoints, xRotation); //The order of these
cubePoints = Math3D.RotateY(cubePoints, yRotation); //rotations is the source
cubePoints = Math3D.RotateZ(cubePoints, zRotation); //of Gimbal Lock
cubePoints = Math3D.Translate(cubePoints, point0, cubeOrigin);
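You don't need any of this to use the example, but as a reference for what such a helper does under the hood: rotation about the Y axis is a single matrix multiplication per point. A sketch of the math, written against the Point3D sketch above (the example's Math3D.RotateY may differ in details such as angle units or rotating in place):

//Rotates one point around the Y axis by the given angle in degrees.
//Standard rotation matrix about Y: X and Z change, Y stays the same.
public static Math3D.Point3D RotateYPoint(Math3D.Point3D p, double degrees)
{
    double rad = degrees * Math.PI / 180.0;
    double cos = Math.Cos(rad);
    double sin = Math.Sin(rad);

    return new Math3D.Point3D(
        p.X * cos + p.Z * sin,
        p.Y,
        -p.X * sin + p.Z * cos);
}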

The next piece of code in the method converts the cube's 3D points in space to the positions where they belong in the resulting 2D image, with a special check for the case where a point falls behind the camera (that is what the if statement is for). Again, if you want to really understand it, you will need to study some basic Linear Algebra:

Math3D.Point3D vec;
for (int i = 0; i < point3D.Length; i++)
{
    vec = cubePoints[i];

    if (vec.Z - camera1.Position.Z >= 0)
    {
        point3D[i].X = (int)((double)-(vec.X - camera1.Position.X) / (-0.1f) * baseCameraZ) + drawOrigin.X;
        point3D[i].Y = (int)((double)(vec.Y - camera1.Position.Y) / (-0.1f) * baseCameraZ) + drawOrigin.Y;
    }
    else
    {
        tmpOrigin.X = (int)((double)(cubeOrigin.X - camera1.Position.X) / (double)(cubeOrigin.Z - camera1.Position.Z) * baseCameraZ) + drawOrigin.X;
        tmpOrigin.Y = (int)((double)-(cubeOrigin.Y - camera1.Position.Y) / (double)(cubeOrigin.Z - camera1.Position.Z) * baseCameraZ) + drawOrigin.Y;

        point3D[i].X = (float)((vec.X - camera1.Position.X) / (vec.Z - camera1.Position.Z) * baseCameraZ + drawOrigin.X);
        point3D[i].Y = (float)(-(vec.Y - camera1.Position.Y) / (vec.Z - camera1.Position.Z) * baseCameraZ + drawOrigin.Y);

        point3D[i].X = (int)point3D[i].X;
        point3D[i].Y = (int)point3D[i].Y;
    }
}
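Since that loop is the heart of the answer to the original question, here it is boiled down. Setting aside the special case handled by the if branch, converting one 3D point to 2D screen coordinates is a perspective divide: the X and Y offsets from the camera are scaled by the point's Z distance from the camera, then shifted to the screen origin. A minimal sketch with my own helper name (zoom plays the role of baseCameraZ above):

//Projects a single 3D point to 2D screen coordinates.
//cam is the camera position, zoom corresponds to baseCameraZ, screenOrigin is where (0,0,0) lands.
//Sketch only: the special case in the loop above (a point on or behind the camera plane) is not reproduced here.
public static PointF ProjectPoint(Math3D.Point3D p, Math3D.Point3D cam,
                                  double zoom, Point screenOrigin)
{
    double depth = p.Z - cam.Z; //signed distance from the camera along the Z axis

    float x = (float)((p.X - cam.X) / depth * zoom + screenOrigin.X);
    float y = (float)(-(p.Y - cam.Y) / depth * zoom + screenOrigin.Y); //screen Y grows downwards

    return new PointF(x, y);
}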

The last thing to do is to draw the whole image using Graphics:

Rectangle bounds = getBounds(point3D);
bounds.Width += drawOrigin.X;
bounds.Height += drawOrigin.Y;

Bitmap tmpBmp = new Bitmap(bounds.Width, bounds.Height);

using (Graphics g = Graphics.FromImage(tmpBmp))
{
    //Back Face
    g.DrawLine(Pens.Black, point3D[0], point3D[1]);
    g.DrawLine(Pens.Black, point3D[1], point3D[2]);
    g.DrawLine(Pens.Black, point3D[2], point3D[3]);
    g.DrawLine(Pens.Black, point3D[3], point3D[0]);

    //Front Face
    g.DrawLine(Pens.Black, point3D[4], point3D[5]);
    g.DrawLine(Pens.Black, point3D[5], point3D[6]);
    g.DrawLine(Pens.Black, point3D[6], point3D[7]);
    g.DrawLine(Pens.Black, point3D[7], point3D[4]);

    //... Four more faces ...
}

Now all you need to do is return the rendered Bitmap.

Note that the design in this example isn't necessarily the best, since every object draws itself and is not aware of the Z-Buffer or of the other objects in the scene. Also, this example uses ints rather than floats in all of the coordinate variables, which makes you lose a lot of accuracy - something that shouldn't be done when working on a 3D renderer.

Here is a good source for learning the basics of 3D rendering in C# and C++ using best practices.