I think it's easiest to explain the problem with an image:
I have two cubes of the same size standing on a table. One face of each is marked green (to make tracking easy). I want to compute the relative position (x, y) of the left cube with respect to the right cube (the red line in the picture), in units of cube size.
Is this even possible? I know the problem would be simple if the two green faces lay in a common plane - for example the top faces of the cubes - but I can't use those for tracking. I would then just compute the homography for one square and multiply the other cube's corners by it.
Should I multiply the homography matrix by a 90-degree rotation matrix to get the 'ground' homography? I plan to do the processing on a smartphone, so the gyroscope and the camera intrinsics could take any value.
Answer 0 (score: 0)
It is possible. Let's assume (or state) that the table is the z = 0 plane and that your first box sits at the origin of that plane. That means the green corners of the left box have (table) coordinates (0,0,0), (1,0,0), (0,0,1) and (1,0,1) (with your box size as the unit). You also have the pixel coordinates of these points. If you feed these 2D/3D correspondences (together with the camera intrinsics and distortion coefficients) to cv::solvePnP, you get the relative pose of the camera with respect to your box (and the plane).
In the next step, you have to intersect the table plane with the ray that goes from the camera center through the pixel of the second box's bottom-right green corner. This intersection will look like (x, y, 0), and [x-1, y] is the translation between the right corners of the two boxes.
Answer 1 (score: 0)
If you have all the information (camera intrinsics), you can proceed the way FooBar answered.
But you can use the information that the points lie on a plane even more directly, with a homography (no need to compute rays etc.):
Compute the homography between the image plane and the ground plane. Unfortunately, you need 4 point correspondences, but only 3 cube points touching the ground plane are visible in the image. Instead, you can use the top plane of the cubes, where the same distances can be measured.
First the code:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // calibrate plane distance for boxes
    cv::Mat input = cv::imread("../inputData/BoxPlane.jpg");

    // If we had 4 known points on the ground plane we could use that plane,
    // but here we use the top plane of the cube instead.
    // Points on the real-world plane at height = 1: so distances are not
    // measured on the ground plane but on the "top plane" of the cube.
    std::vector<cv::Point2f> objectPoints;
    objectPoints.push_back(cv::Point2f(0,0)); // top front
    objectPoints.push_back(cv::Point2f(1,0)); // top right
    objectPoints.push_back(cv::Point2f(0,1)); // top left
    objectPoints.push_back(cv::Point2f(1,1)); // top back

    // image points:
    std::vector<cv::Point2f> imagePoints;
    imagePoints.push_back(cv::Point2f(141,302)); // top front
    imagePoints.push_back(cv::Point2f(334,232)); // top right
    imagePoints.push_back(cv::Point2f(42,231));  // top left
    imagePoints.push_back(cv::Point2f(223,177)); // top back

    cv::Point2f pointToMeasureInImage(741,200); // bottom right of second box

    // for the transform, the point(s) must be in a vector
    std::vector<cv::Point2f> sourcePoints;
    sourcePoints.push_back(pointToMeasureInImage);
    sourcePoints.push_back(cv::Point2f(718,141));
    sourcePoints.push_back(imagePoints[0]);

    // list of points that correspond to sourcePoints; not strictly needed,
    // only used to create some output
    std::vector<int> distMeasureIndices;
    distMeasureIndices.push_back(1);
    distMeasureIndices.push_back(3);
    distMeasureIndices.push_back(2);

    // draw points for visualization
    for(unsigned int i = 0; i < imagePoints.size(); ++i)
    {
        cv::circle(input, imagePoints[i], 5, cv::Scalar(0,255,255));
    }

    // compute the relation between the image plane and the real-world
    // top plane of the cubes
    cv::Mat homography = cv::findHomography(imagePoints, objectPoints);

    std::vector<cv::Point2f> destinationPoints;
    cv::perspectiveTransform(sourcePoints, destinationPoints, homography);

    // compute the distance between some defined points
    // (here I use the input points, but it could be something else)
    for(unsigned int i = 0; i < sourcePoints.size(); ++i)
    {
        std::cout << "distance: " << cv::norm(destinationPoints[i] - objectPoints[distMeasureIndices[i]]) << std::endl;
        cv::circle(input, sourcePoints[i], 5, cv::Scalar(0,255,255));
        // draw the line that was measured
        cv::line(input, imagePoints[distMeasureIndices[i]], sourcePoints[i], cv::Scalar(0,255,255), 2);
    }

    // just for fun, measure distances on the 2nd box:
    float distOn2ndBox = cv::norm(destinationPoints[0] - destinationPoints[1]);
    std::cout << "distance on 2nd box: " << distOn2ndBox << " which should be near 1.0" << std::endl;
    cv::line(input, sourcePoints[0], sourcePoints[1], cv::Scalar(255,0,255), 2);

    cv::imshow("input", input);
    cv::waitKey(0);
    return 0;
}
Here is the output I want to explain:
distance: 2.04674
distance: 2.82184
distance: 1
distance on 2nd box: 0.882265 which should be near 1.0
These distances are:
1. the yellow bottom one from one box to the other
2. the yellow top one
3. the yellow one on the first box
4. the pink one
So the length of the red line (the one you asked for) should be almost 2 x the cube edge length. But as you can see, we have some error.
The better/more accurate your pixel positions are before the homography computation, the more accurate your result will be.
You need a pinhole camera model, so undistort your camera (in a real application).
Also keep in mind that you could compute the distances on the ground plane directly if you had 4 points visible on it (with no 3 of them on the same line)!