I tried to use triangulatePoints from OpenCV, but I think I'm doing something wrong (I read a question on Stack Overflow about triangulatePoints, but I didn't understand all of it). Suppose I have the coordinates of one point, pt1 and pt2, as seen by the left and right cameras respectively. pt1 and pt2 are cv::Point.
So I have:
cv::Mat cam0(3, 4, CV_64F, k_data1); // k_data1 is the [R|t] 3x4 matrix for the left camera
cv::Mat cam1(3, 4, CV_64F, k_data2); // k_data2 is the [R|t] 3x4 matrix for the right camera
cv::Point pt1; // for the left camera
cv::Point pt2; // for the right camera
I also defined:
cv::Mat pnt3D(1, 1, CV_64FC4);
My question is: how do I correctly define these two points (cv::Point)?
I tried doing this:
cv::Mat_<cv::Point> cam0pnts;
cam0pnts.at<cv::Point>(0) = pt1;
cv::Mat_<cv::Point> cam1pnts;
cam1pnts.at<cv::Point>(0) = pt2;
But the app throws an exception, so maybe I'm doing it wrong.
OK, with the help of @Optimus 1072 I corrected some lines of code and ended up with this:
double pCam0[16], pCam1[16];
cv::Point pt1 = m_history.getPoint(0);
cv::Point pt2 = m_history.getPoint(1);
m_cam1.GetOpenglProjectionMatrix(pCam0, 640, 480);
m_cam2.GetOpenglProjectionMatrix(pCam1, 640, 480);
cv::Mat cam0(3, 4, CV_64F, pCam0);
cv::Mat cam1(3, 4, CV_64F, pCam1);
vector<cv::Point2f> pt1Vec;
vector<cv::Point2f> pt2Vec;
pt1Vec.push_back(pt1);
pt2Vec.push_back(pt2);
cv::Mat pnt3D(1,1, CV_64FC4);
cv::triangulatePoints(cam0, cam1, pt1Vec, pt2Vec, pnt3D);
But I still get an exception:
... opencv\opencv-2.4.0\opencv\modules\calib3d\src\triangulate.cpp:75: error: (-209) Number of proj points coordinates must be == 2
Answer 0 (score: 0)
I think the correct way is to form two vectors of 2-D points, like this:
vector<Point2f> pt1;
vector<Point2f> pt2;
Then you can insert points into these vectors:
cv::Point2f p;
p.x = x;
p.y = y;
pt1.push_back(p);
Answer 1 (score: 0)
In the end, this worked:
cv::Mat pointsMat1(2, 1, CV_64F);
cv::Mat pointsMat2(2, 1, CV_64F);
int size0 = m_history.getHistorySize();
for (int i = 0; i < size0; i++)
{
    cv::Point pt1 = m_history.getOriginalPoint(0, i);
    cv::Point pt2 = m_history.getOriginalPoint(1, i);
    pointsMat1.at<double>(0, 0) = pt1.x;
    pointsMat1.at<double>(1, 0) = pt1.y;
    pointsMat2.at<double>(0, 0) = pt2.x;
    pointsMat2.at<double>(1, 0) = pt2.y;
    cv::Mat pnts3D(4, 1, CV_64F); // homogeneous output (x, y, z, w)
    cv::triangulatePoints(m_projectionMat1, m_projectionMat2, pointsMat1, pointsMat2, pnts3D);
}