I use Blender to generate some color images with their corresponding depth maps, along with the camera parameters (intrinsic and extrinsic).
I then want to use this information to generate a 3D point cloud from these 2D images using 2D-to-3D projection.
I want to have the camera's rotation and translation matrices. I used the code from this link, camera matrix for Blender, written by @rfabbri, and used the **get_3x4_RT_matrix_from_blender** method to obtain the rotation and translation matrices.
After that, I want to perform the 2D-to-3D projection with all of this information.
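As I understand it, **get_3x4_RT_matrix_from_blender** returns the extrinsics in OpenCV convention, i.e. R = R_world2cv and t = T_world2cv such that x_cam = R * x_world + t. With the intrinsic matrix K from **get_calibration_matrix_K_from_blender**, a pixel p = (u, v, 1)^T with depth d should therefore back-project to the world point

X_world = R^-1 * (d * K^-1 * p - t)

which is the formula the code below tries to evaluate.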
For the 2D-to-3D projection, I wrote the following Java code:
static double[] projUVZtoXY( double u, double v, double d)
{
    // "u" and "v" are the pixel coordinates in the 2D image and
    // "d" is the depth of that pixel (distance from the point to the camera)
    double[] p = new double[]{u, v, 1}; // homogeneous pixel coordinates
    double[] translate = calibStruct.getM_Trans();    // translation vector t, from T_world2cv in the get_3x4_RT_matrix_from_blender method
    double[] rotation = calibStruct.getM_RotMatrix(); // rotation matrix R, from R_world2cv in the get_3x4_RT_matrix_from_blender method
    double[] K = calibStruct.getM_K();                // intrinsic matrix, from K in the get_calibration_matrix_K_from_blender method
    double[][] invertR = invert33(rotation);          // R^-1
    double[][] invertK = invert33(K);                 // K^-1
    double[][] invK_mul_depth = multiply33_scalar(invertK, d);    // d * K^-1 (scales every entry of K^-1 by d)
    double[] invK_mul_depth_p = multiply33_31(invK_mul_depth, p); // d * K^-1 * p (3x3 matrix times 3x1 vector)
    // subtract the translation vector: d * K^-1 * p - t
    double[] d_InvK_p_trans = new double[]{invK_mul_depth_p[0] - translate[0],
                                           invK_mul_depth_p[1] - translate[1],
                                           invK_mul_depth_p[2] - translate[2]};
    double[] xyz = multiply33_31(invertR, d_InvK_p_trans); // R^-1 * (d * K^-1 * p - t)
    return xyz;
}
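For reference, here is roughly what my matrix helpers do (a minimal sketch, assuming 3x3 matrices are stored row-major in flat double[9] arrays; invert33 uses the adjugate formula):

```java
// Sketch of the helpers used above, assuming 3x3 matrices are stored
// row-major in flat double[9] arrays.
static double[][] invert33(double[] m) {
    // inverse via the adjugate: inv(M) = adj(M) / det(M)
    double det = m[0] * (m[4] * m[8] - m[5] * m[7])
               - m[1] * (m[3] * m[8] - m[5] * m[6])
               + m[2] * (m[3] * m[7] - m[4] * m[6]);
    double s = 1.0 / det;
    return new double[][]{
        { s * (m[4] * m[8] - m[5] * m[7]), s * (m[2] * m[7] - m[1] * m[8]), s * (m[1] * m[5] - m[2] * m[4]) },
        { s * (m[5] * m[6] - m[3] * m[8]), s * (m[0] * m[8] - m[2] * m[6]), s * (m[2] * m[3] - m[0] * m[5]) },
        { s * (m[3] * m[7] - m[4] * m[6]), s * (m[1] * m[6] - m[0] * m[7]), s * (m[0] * m[4] - m[1] * m[3]) }
    };
}

static double[][] multiply33_scalar(double[][] m, double s) {
    // multiply every entry of a 3x3 matrix by a scalar
    double[][] out = new double[3][3];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            out[i][j] = m[i][j] * s;
    return out;
}

static double[] multiply33_31(double[][] m, double[] v) {
    // 3x3 matrix times 3x1 vector
    double[] out = new double[3];
    for (int i = 0; i < 3; i++)
        out[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
    return out;
}
```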
All of the code above is trying to implement this 3D warping algorithm, which back-projects a uv pixel with its depth into an XYZ 3D point.
But when I generate the 3D point cloud, it looks like this (viewed in Meshlab):
I can't understand what is happening here. Why are all the players from the image repeated over and over in the 3D point cloud?
Can anyone guess what is going on?
I suspect the rotation matrix I get from Blender may be incorrect. Do you have any ideas?
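One sanity check I plan to try (a rough sketch, untested; to33 is a hypothetical helper that reshapes a flat double[9] into a double[3][3]): forward-project the recovered xyz back into the image with the same K, R, and t. If the round trip does not give back the original (u, v, d), then the matrix plumbing (storage order, invert33) is broken rather than the Blender export:

```java
// Round-trip check: (u, v, d) -> world point -> back to (u, v, d),
// assuming x_cam = R * x_world + t (OpenCV convention) and that the
// depth map stores the camera-space z coordinate.
static double[][] to33(double[] m) {
    // hypothetical helper: flat row-major double[9] -> double[3][3]
    return new double[][]{{m[0], m[1], m[2]}, {m[3], m[4], m[5]}, {m[6], m[7], m[8]}};
}

static void checkRoundTrip(double u, double v, double d)
{
    double[] xyz = projUVZtoXY(u, v, d);

    double[][] R = to33(calibStruct.getM_RotMatrix());
    double[][] K = to33(calibStruct.getM_K());
    double[] t = calibStruct.getM_Trans();

    // back to camera coordinates: x_cam = R * x_world + t
    double[] cam = multiply33_31(R, xyz);
    cam[0] += t[0]; cam[1] += t[1]; cam[2] += t[2];

    // back to pixel coordinates: p ~ K * x_cam (divide by the third component)
    double[] pix = multiply33_31(K, cam);
    System.out.printf("u: %f -> %f, v: %f -> %f, d: %f -> %f%n",
            u, pix[0] / pix[2], v, pix[1] / pix[2], d, cam[2]);
}
```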
Thanks,
Mozhde