I'm trying to rotate an image by some angle and then show it in a window. My idea is to rotate it and then display it in a new window, where the new width and height are computed from the old width and height:
new_width = x * cos angle + y * sin angle
new_height = y * cos angle + x * sin angle
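For example, for a 30° rotation of a hypothetical 400×300 image, that calculation works out to roughly 497×460 (a quick sketch, just to make the numbers concrete):

import math

x, y = 400, 300             # hypothetical original width and height
radians = math.radians(30)  # the rotation angle used in the code below

new_width = x * math.cos(radians) + y * math.sin(radians)    # ~496.4
new_height = y * math.cos(radians) + x * math.sin(radians)   # ~459.8
# rounding up as in the code below gives 497 x 460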
I expected the result to look like this:
But it turns out like this:
My code is here:
#!/usr/bin/env python -tt
#coding:utf-8

import sys
import math
import cv2
import numpy as np


def rotateImage(image, angel):#parameter angel in degrees
    if len(image.shape) > 2:#check colorspace
        shape = image.shape[:2]
    else:
        shape = image.shape
    image_center = tuple(np.array(shape)/2)#rotation center

    radians = math.radians(angel)
    x, y = im.shape
    print 'x =', x
    print 'y =', y
    new_x = math.ceil(math.cos(radians)*x + math.sin(radians)*y)
    new_y = math.ceil(math.sin(radians)*x + math.cos(radians)*y)
    new_x = int(new_x)
    new_y = int(new_y)

    rot_mat = cv2.getRotationMatrix2D(image_center, angel, 1.0)
    print 'rot_mat =', rot_mat
    result = cv2.warpAffine(image, rot_mat, shape, flags=cv2.INTER_LINEAR)
    return result, new_x, new_y


def show_rotate(im, width, height):
#    width = width/2
#    height = height/2
#    win = cv2.cv.NamedWindow('ro_win',cv2.cv.CV_WINDOW_NORMAL)
#    cv2.cv.ResizeWindow('ro_win', width, height)
    win = cv2.namedWindow('ro_win')
    cv2.imshow('ro_win', im)
    if cv2.waitKey() == '\x1b':
        cv2.destroyWindow('ro_win')


if __name__ == '__main__':
    try:
        im = cv2.imread(sys.argv[1], 0)
    except:
        print '\n', "Can't open image, OpenCV or file missing."
        sys.exit()

    rot, width, height = rotateImage(im, 30.0)
    print width, height
    show_rotate(rot, width, height)
There must be some silly mistake in my code causing this problem, but I can't figure it out... And I know my code is not pythonic enough :( ..sorry for that..
Can anyone help me?
Best,
bearzk
Answer 0 (score: 10)
As BloodyD's answer said, cv2.warpAffine doesn't auto-center the transformed image. Instead, it simply transforms each pixel using the transformation matrix. (This could move pixels anywhere in Cartesian space, including out of the original image area.) Then, when you specify the destination image size, it grabs an area of that size, beginning at (0,0), i.e. the upper left of the original frame. Any parts of your transformed image that don't lie in that region will be cut off.
Here's Python code to rotate and scale an image, with the result centered:
def rotateAndScale(img, scaleFactor = 0.5, degreesCCW = 30):
    (oldY,oldX) = img.shape #note: numpy uses (y,x) convention but most OpenCV functions use (x,y)
    M = cv2.getRotationMatrix2D(center=(oldX/2,oldY/2), angle=degreesCCW, scale=scaleFactor) #rotate about center of image.

    #choose a new image size.
    newX,newY = oldX*scaleFactor,oldY*scaleFactor
    #include this if you want to prevent corners being cut off
    r = np.deg2rad(degreesCCW)
    newX,newY = (abs(np.sin(r)*newY) + abs(np.cos(r)*newX),abs(np.sin(r)*newX) + abs(np.cos(r)*newY))

    #the warpAffine function call, below, basically works like this:
    # 1. apply the M transformation on each pixel of the original image
    # 2. save everything that falls within the upper-left "dsize" portion of the resulting image.

    #So I will find the translation that moves the result to the center of that region.
    (tx,ty) = ((newX-oldX)/2,(newY-oldY)/2)
    M[0,2] += tx #third column of matrix holds translation, which takes effect after rotation.
    M[1,2] += ty

    rotatedImg = cv2.warpAffine(img, M, dsize=(int(newX),int(newY)))
    return rotatedImg
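A minimal usage sketch (the file name is just a placeholder; the image is loaded as grayscale because the function above unpacks img.shape into exactly two values):

img = cv2.imread("some_image.jpg", 0)  # placeholder path, grayscale
rotated = rotateAndScale(img, scaleFactor=1.0, degreesCCW=30)
cv2.imshow("rotated", rotated)
cv2.waitKey()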
Answer 1 (score: 4)
When you get a rotation matrix like this:
rot_mat = cv2.getRotationMatrix2D(image_center,angel,1.0)
your "scale" parameter is set to 1.0, so if you use it to transform your image matrix into a result matrix of the same size, it will necessarily be clipped.
You can instead get a rotation matrix like this:
rot_mat = cv2.getRotationMatrix2D(image_center,angel,0.5)
It will both rotate and shrink, leaving room around the edges (you can scale the image up first so that you still end up with a big image).
Also, it looks like you are confusing the numpy and OpenCV conventions for image sizes. OpenCV uses (x, y) for image sizes and point coordinates, while numpy uses (y, x). That's probably why you're going from a portrait to a landscape aspect ratio.
I tend to be explicit about it, like this:
imageHeight = image.shape[0]
imageWidth = image.shape[1]
pointcenter = (imageWidth/2, imageHeight/2)  # OpenCV point order is (x, y)
etc...
Ultimately, this worked fine for me:
def rotateImage(image, angel):#parameter angel in degrees
    height = image.shape[0]
    width = image.shape[1]
    height_big = height * 2
    width_big = width * 2
    image_big = cv2.resize(image, (width_big, height_big))
    image_center = (width_big/2, height_big/2)#rotation center
    rot_mat = cv2.getRotationMatrix2D(image_center, angel, 0.5)
    result = cv2.warpAffine(image_big, rot_mat, (width_big, height_big), flags=cv2.INTER_LINEAR)
    return result
Update:
Here is the full script that I executed. Note it's just cv2.imshow("winname", image) and cv2.waitKey() with no arguments to keep the window open:
import cv2

def rotateImage(image, angel):#parameter angel in degrees
    height = image.shape[0]
    width = image.shape[1]
    height_big = height * 2
    width_big = width * 2
    image_big = cv2.resize(image, (width_big, height_big))
    image_center = (width_big/2, height_big/2)#rotation center
    rot_mat = cv2.getRotationMatrix2D(image_center, angel, 0.5)
    result = cv2.warpAffine(image_big, rot_mat, (width_big, height_big), flags=cv2.INTER_LINEAR)
    return result

imageOriginal = cv2.imread("/Path/To/Image.jpg")
# this was an iPhone image that I wanted to resize to something manageable to view
# so I knew beforehand that this is an appropriate size
imageOriginal = cv2.resize(imageOriginal, (600,800))
imageRotated = rotateImage(imageOriginal, 45)

cv2.imshow("Rotated", imageRotated)
cv2.waitKey()
There really isn't much there... And you are certainly right to use if __name__ == '__main__': if what you're working on is a real module.
Answer 2 (score: 2)
Well, this question doesn't seem to be up-to-date anymore, but I had the same problem and it took me a while to solve it without scaling the original image up or down. I'll just post my solution (unfortunately C++ code, but it could easily be ported to Python if needed):
#include <math.h>
#include <opencv2/opencv.hpp>  // for Mat, warpAffine, etc.

using namespace cv;

#define PI 3.14159265
#define SIN(angle) sin(angle * PI / 180)
#define COS(angle) cos(angle * PI / 180)

void rotate(const Mat src, Mat &dest, double angle, int borderMode, const Scalar &borderValue){

    int w = src.size().width, h = src.size().height;

    // resize the destination image
    Size2d new_size = Size2d(abs(w * COS((int)angle % 180)) + abs(h * SIN((int)angle % 180)), abs(w * SIN((int)angle % 180)) + abs(h * COS((int)angle % 180)));
    dest = Mat(new_size, src.type());

    // this is our rotation point
    Size2d old_size = src.size();
    Point2d rot_point = Point2d(old_size.width / 2.0, old_size.height / 2.0);

    // and this is the rotation matrix
    // same as in the opencv docs, but in 3x3 form
    double a = COS(angle), b = SIN(angle);
    Mat rot_mat = (Mat_<double>(3,3) << a, b, (1 - a) * rot_point.x - b * rot_point.y, -1 * b, a, b * rot_point.x + (1 - a) * rot_point.y, 0, 0, 1);

    // next the translation matrix
    double offsetx = (new_size.width - old_size.width) / 2,
           offsety = (new_size.height - old_size.height) / 2;
    Mat trans_mat = (Mat_<double>(3,3) << 1, 0, offsetx, 0, 1, offsety, 0, 0, 1);

    // multiply them: we rotate first, then translate, so the order is important!
    // inverse order, so that the transformations are applied in the right order
    Mat affine_mat = Mat(trans_mat * rot_mat).rowRange(0, 2);

    // now just apply the affine transformation matrix
    warpAffine(src, dest, affine_mat, new_size, INTER_LINEAR, borderMode, borderValue);
}
The general solution is to rotate the image and then translate the rotated picture to the right position. So we create two transformation matrices (the first for the rotation, the second for the translation) and multiply them into the final affine transformation. Since the matrix returned by OpenCV's getRotationMatrix2D is only 2x3, I had to build the matrices by hand in 3x3 form so they could be multiplied. Then just take the first two rows and apply the affine transformation.
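For reference, a rough NumPy sketch of the same idea (this is only an illustration of the approach, not the Gist mentioned below; the function name and the use of getRotationMatrix2D for the rotation part are my own choices):

import cv2
import numpy as np

def rotate_no_crop(img, degrees):
    h, w = img.shape[:2]
    # bounding box of the rotated image, as in the C++ code above
    rad = np.deg2rad(degrees)
    new_w = int(abs(w * np.cos(rad)) + abs(h * np.sin(rad)))
    new_h = int(abs(w * np.sin(rad)) + abs(h * np.cos(rad)))
    # 2x3 rotation matrix about the old image center, padded to 3x3
    rot = np.vstack([cv2.getRotationMatrix2D((w / 2.0, h / 2.0), degrees, 1.0),
                     [0, 0, 1]])
    # translation that re-centers the result in the new canvas
    trans = np.array([[1, 0, (new_w - w) / 2.0],
                      [0, 1, (new_h - h) / 2.0],
                      [0, 0, 1.0]])
    # rotate first, then translate; warpAffine only needs the top two rows
    affine = trans.dot(rot)[:2, :]
    return cv2.warpAffine(img, affine, (new_w, new_h), flags=cv2.INTER_LINEAR)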
Edit: I have created a Gist, because I have needed this functionality too often in different projects. There is also a Python version of it: https://gist.github.com/BloodyD/97917b79beb332a65758