I am trying to achieve the following:
1) detect feature points on an image and save them to an array;
2) copy and rotate the original image;
3) detect points on the rotated image;
4) "rotate" (transform) the detected points of the original image with the same angle (matrix);
5) use the rotation to check the reliability of the method (check how many features of the rotated image match the transformed features of the original).
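The check in step 5 can be sketched in plain Python (a minimal sketch of the math only, independent of OpenCV; `rotate_points` and `match_ratio` are just illustrative names, and the rotation follows `getRotationMatrix2D`'s convention of positive angle = counter-clockwise with the image y axis pointing down):

```python
import math

def rotate_points(points, center, angle_deg):
    """Rotate (x, y) points about `center`, matching the convention of
    cv::getRotationMatrix2D: positive angle = counter-clockwise in an
    image coordinate system whose y axis points down."""
    a = math.radians(angle_deg)
    cx, cy = center
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * math.cos(a) + dy * math.sin(a),
                    cy - dx * math.sin(a) + dy * math.cos(a)))
    return out

def match_ratio(detected, transformed, tol=2.0):
    """Step 5: fraction of points detected on the rotated image that land
    within `tol` pixels of some transformed original point."""
    hits = sum(1 for p in detected
               if any(math.dist(p, q) <= tol for q in transformed))
    return hits / len(detected) if detected else 0.0
```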
My problems actually begin at step 2: when I try to rotate a square image by a -90 degree angle (my task needs 45, by the way), I get some black/faded borders, and the resulting image is 202x203 while the original is 201x201.
The code I use to rotate the Mat:
- (Mat)rotateImage:(Mat)imageMat angle:(double)angle {
    // Get the rotation matrix for rotating the image around its center
    cv::Point2f center(imageMat.cols/2.0, imageMat.rows/2.0);
    cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);
    // Determine the bounding rectangle of the rotated image
    cv::Rect bbox = cv::RotatedRect(center, imageMat.size(), angle).boundingRect();
    // Adjust the transformation matrix so the result is centred in the box
    rot.at<double>(0,2) += bbox.width/2.0 - center.x;
    rot.at<double>(1,2) += bbox.height/2.0 - center.y;
    cv::Mat dst;
    cv::warpAffine(imageMat, dst, rot, bbox.size());
    return dst;
}
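For reference, here is a pure-Python reconstruction of what that bounding-box step computes (a sketch assuming cv::RotatedRect::boundingRect rounds the corner coordinates with floor/ceil and adds one pixel per dimension; `rotated_bbox` is a hypothetical name):

```python
import math

def rotated_bbox(w, h, angle_deg):
    """Bounding box of a w x h rectangle rotated about its center.
    Returns the exact float size and the integer size produced by
    floor(min)/ceil(max) + 1 rounding, as cv::RotatedRect::boundingRect does."""
    a = math.radians(angle_deg)
    cx, cy = w / 2.0, h / 2.0
    xs, ys = [], []
    for dx, dy in [(-cx, -cy), (cx, -cy), (cx, cy), (-cx, cy)]:
        xs.append(cx + dx * math.cos(a) + dy * math.sin(a))
        ys.append(cy - dx * math.sin(a) + dy * math.cos(a))
    float_size = (max(xs) - min(xs), max(ys) - min(ys))
    int_size = (math.ceil(max(xs)) - math.floor(min(xs)) + 1,
                math.ceil(max(ys)) - math.floor(min(ys)) + 1)
    return float_size, int_size
```

For a 201x201 image at -90 degrees the exact box is 201x201, but the floor/ceil rounding plus a few ULPs of floating-point noise in the corner coordinates inflate it to 202x203, which is exactly the size I'm seeing.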
Taken from https://stackoverflow.com/a/24352524/1286212
I also tried this answer, with the same result: https://stackoverflow.com/a/29945433/1286212
The next problem is about point rotation. I am using this code to transform the original features by the same angle (-90):
- (std::vector<cv::Point>)transformPoints:(std::vector<cv::Point>)featurePoints fromMat:(Mat)imageMat angle:(double)angle {
    cv::Point2f center(imageMat.cols/2.0, imageMat.rows/2.0);
    cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);
    std::vector<cv::Point> dst;
    cv::transform(featurePoints, dst, rot);
    return dst;
}
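To see what cv::transform does with that matrix, here is the 2x3 matrix documented for cv::getRotationMatrix2D rebuilt in plain Python and applied to a point by hand (`rotation_matrix_2d` and `transform_point` are just illustrative names):

```python
import math

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    """The 2x3 affine matrix documented for cv::getRotationMatrix2D:
    [[ a, b, (1-a)*cx - b*cy ],
     [-b, a, b*cx + (1-a)*cy ]]   with a = scale*cos, b = scale*sin."""
    a = scale * math.cos(math.radians(angle_deg))
    b = scale * math.sin(math.radians(angle_deg))
    cx, cy = center
    return [[a, b, (1 - a) * cx - b * cy],
            [-b, a, b * cx + (1 - a) * cy]]

def transform_point(m, p):
    """Per-point effect of cv::transform with a 2x3 matrix."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

For a 3x3 image my code uses center (1.5, 1.5), and a +90 rotation then maps (0, 1) to (1, 3); with the pixel-index center (1, 1) it maps to the expected (1, 2) instead.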
Because of the wrongly rotated image, I cannot tell whether this works as I expect, so I made a small example to show what I mean:
cv::Mat testMat(3, 3, CV_8UC3, cv::Scalar(255, 0, 0));
testMat.at<Vec3b>(cv::Point(0, 1)) = Vec3b(0, 255, 0);
for (int i = 0; i < testMat.rows; i++) {
    for (int j = 0; j < testMat.cols; j++) {
        Vec3b color = testMat.at<Vec3b>(cv::Point(i, j));
        NSLog(@"Pixel (%d, %d) color = (%d, %d, %d)", i, j, color[0], color[1], color[2]);
    }
}

std::vector<cv::Point> featurePoints1;
std::vector<cv::Point> featureRot;
cv::Point featurePoint = cv::Point(0, 1);
featurePoints1.push_back(featurePoint);

cv::Mat rotated = [self rotateImage:testMat angle:-90];
featureRot = [self transformPoints:featurePoints1 fromMat:testMat angle:90];

for (int i = 0; i < rotated.rows; i++) {
    for (int j = 0; j < rotated.cols; j++) {
        Vec3b color = rotated.at<Vec3b>(cv::Point(i, j));
        NSLog(@"Pixel (%d, %d) color = (%d, %d, %d)", i, j, color[0], color[1], color[2]);
    }
}
Both Mats (testMat and rotated) should be 3x3, but the second one comes out 4x5. And the green pixel should be transformed from (0, 1) to (1, 2) by the -90 rotation, but the transformPoints:fromMat:angle: method actually gives (1, 3) (because of the wrong dimensions of the rotated image, I guess). Here are the logs for the original image:
Pixel (0, 0) color = (255, 0, 0)
Pixel (0, 1) color = (0, 255, 0)
Pixel (0, 2) color = (255, 0, 0)
Pixel (1, 0) color = (255, 0, 0)
Pixel (1, 1) color = (255, 0, 0)
Pixel (1, 2) color = (255, 0, 0)
Pixel (2, 0) color = (255, 0, 0)
Pixel (2, 1) color = (255, 0, 0)
Pixel (2, 2) color = (255, 0, 0)
And for the rotated one:
Pixel (0, 0) color = (0, 0, 0)
Pixel (0, 1) color = (0, 0, 0)
Pixel (0, 2) color = (0, 0, 0)
Pixel (0, 3) color = (0, 0, 0)
Pixel (0, 4) color = (255, 127, 0)
Pixel (1, 0) color = (0, 0, 0)
Pixel (1, 1) color = (0, 0, 0)
Pixel (1, 2) color = (0, 0, 0)
Pixel (1, 3) color = (0, 0, 0)
Pixel (1, 4) color = (0, 71, 16)
Pixel (2, 0) color = (128, 0, 0)
Pixel (2, 1) color = (255, 0, 0)
Pixel (2, 2) color = (255, 0, 0)
Pixel (2, 3) color = (128, 0, 0)
Pixel (2, 4) color = (91, 16, 0)
Pixel (3, 0) color = (0, 128, 0)
Pixel (3, 1) color = (128, 128, 0)
Pixel (3, 2) color = (255, 0, 0)
Pixel (3, 3) color = (128, 0, 0)
Pixel (3, 4) color = (0, 0, 176)
As you can see, the pixel colors are corrupted as well. What am I doing wrong or misunderstanding?
UPD, solved:
1) You should use boundingRect2f() instead of boundingRect(), so that you don't lose floating-point precision and you get the right bounding box.
2) You should set your center as cv::Point2f center(imageMat.cols/2.0f - 0.5f, imageMat.rows/2.0f - 0.5f) to get the actual pixel-index center (not sure why every answer on SO implements the center the wrong way).
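Putting both fixes together, the corrected geometry can be sketched in plain Python (an assumption of mine: with the pixel-index center, the matrix shift also becomes new center minus old center, i.e. bbox/2 - 0.5 - center; `rotate_dims_and_point` is a hypothetical name):

```python
import math

def rotate_dims_and_point(w, h, angle_deg, p):
    """Output size and transformed point using the fixed geometry:
    pixel-index center (w/2 - 0.5, h/2 - 0.5) and an exact (float)
    bounding box instead of boundingRect's integer rounding."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    cx, cy = w / 2.0 - 0.5, h / 2.0 - 0.5
    # exact bounding box of the rotated w x h rectangle
    bw = abs(w * c) + abs(h * s)
    bh = abs(w * s) + abs(h * c)
    # rotate about the pixel-index center, then shift so the result is
    # centred on the new image's pixel-index center
    x, y = p
    dx, dy = x - cx, y - cy
    xr = cx + dx * c + dy * s + (bw / 2.0 - 0.5 - cx)
    yr = cy - dx * s + dy * c + (bh / 2.0 - 0.5 - cy)
    return (round(bw), round(bh)), (xr, yr)
```

For the 3x3 test image, a 90-degree point transform now gives a 3x3 output size and maps the green pixel (0, 1) to (1, 2), as expected.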
Answer (score: 1):
Use boundingRect2f instead of boundingRect. boundingRect works with integer values, so it loses precision.