I want to warp individual parts of an image so that I can project it onto an uneven surface. Ultimately, I want to warp the image seen HERE, somewhat like what is done in THIS from the HERE project.

My problem is that when I apply the transform to each sub-section of the image, things do not line up.

Here is how I perform the transform and then the stitching (the process of cropping the pieces and pasting them onto the final image):

1. Four points are used to transform the image to match the corresponding original four points. This is done with my function named Perspective_transform().
   a. I take the two sets of four points and pass them to M = cv2.getPerspectiveTransform(corners, newCorners)
   b. I then call: warped = cv2.warpPerspective(roi, M, (width, height))
2. After getting the new warped image, I use a mask to blend it with the ROI it is associated with:
   a. This is done with the function quadr_croped().
import cv2
import numpy as np
# `sct` and `monitor` come from the mss screen-capture setup (not shown in this snippet)

img0 = np.array(sct.grab(monitor))
clone = img0.copy()
total_height, total_width, channels = img0.shape

xSub = int(input("How many columns would you like to divide the screen in to? (integers only)"))
ySub = int(input("How many rows would you like to divide the screen in to? (integers only)"))

roi_width = float(total_width / xSub)
roi_height = float(total_height / ySub)

point_list = []
def Perspective_transform(image, roi, corners, newCorners, i=-1):
    corners = list(corners)
    newCorners = list(newCorners)
    height, width, pixType = image.shape

    # repackage both quads as float32 point arrays for getPerspectiveTransform
    corners = np.array([[corners[0][0], corners[0][1], corners[0][2], corners[0][3]]], np.float32)
    newCorners = np.array([[newCorners[0][0], newCorners[0][1], newCorners[0][2], newCorners[0][3]]], np.float32)

    M = cv2.getPerspectiveTransform(corners, newCorners)
    # warped = cv2.warpPerspective(roi, M, (width, height), flags=cv2.INTER_LINEAR)
    warped = cv2.warpPerspective(roi, M, (width, height))
    return warped
def quadr_croped(mainImg, image, pts, i):  # example
    # mask defaulting to black for 3-channel and transparent for 4-channel
    # (of course replace corners with yours)
    mask = np.zeros(image.shape, dtype=np.uint8)
    roi_corners = pts  # np.array([[(10,10), (300,300), (10,300)]], dtype=np.int32)

    # fill the ROI so it doesn't get wiped out when the mask is applied
    channel_count = image.shape[2]  # i.e. 3 or 4 depending on your image
    ignore_mask_color = (255,) * channel_count
    cv2.fillConvexPoly(mask, roi_corners, ignore_mask_color)

    # apply the mask
    masked_image = cv2.bitwise_and(image, mask)
    mainImg = cv2.bitwise_or(mainImg, mask)
    mainImg = mainImg + masked_image
    # cv2.imshow("debug: image, mainImg: " + str(i), mainImg)
    return mainImg
def draw_quadr(img1):
    # set up list of ROI quadrilaterals (quadrilateral == polygon with 4 sides)
    # quadrilateral_list and H_points_list are defined elsewhere in the full script
    numb_ROI = xSub * ySub
    skips = int((numb_ROI - 1) / xSub)
    numb_ROI = skips + numb_ROI

    quadrilateral_list.clear()
    for i in range(numb_ROI):
        if not point_list[i][0] <= point_list[(i + xSub + 2)][0]:
            continue

        vert_poly = np.array([[
            point_list[i],
            point_list[i + 1],
            point_list[i + xSub + 2],
            point_list[i + xSub + 1]
        ]], dtype=np.int32)

        verticesPoly_old = np.array([[
            H_points_list[i],
            H_points_list[i + 1],
            H_points_list[i + xSub + 2],
            H_points_list[i + xSub + 1]
        ]], dtype=np.int32)

        roi = img0.copy()
        # cv2.imshow("debug: roi" + str(i), roi)

        overlay = Perspective_transform(
            img1,
            roi,
            verticesPoly_old,
            vert_poly,
            i)
        img1 = quadr_croped(img1, overlay, vert_poly, i)

        cv2.polylines(img1, vert_poly, True, (255, 255, 0))
        quadrilateral_list.append(vert_poly)

        pt1 = point_list[i]
        pt2 = point_list[i + xSub + 2]
        cntPt = (int((pt1[0] + pt2[0]) / 2), int((pt1[1] + pt2[1]) / 2))
        cv2.putText(img1, str(len(quadrilateral_list) - 1), cntPt,
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA)
        # cv2.imshow(str(i), img1)
    return img1
Please look at these carefully, as they show the problem well.

Original image, no distortion

This image is shifted left of center (no movement in the y direction)
Result of the image distorted in the x direction

This image is shifted up from center (no movement in the x direction)
Result of the image distorted in the y direction

This image is shifted both up and over from center
Result of the image distorted in both the x and y directions

I am new to computer vision and to Stack Overflow. I hope I have included all of the information needed to describe the problem; please let me know if anything else would help.
Answer (score: 1)
There are certainly some errors in your code, because the output images do not look the way they should (or could). But even with those fixed, you will never get the result you want, because of the mathematical nature of perspective transforms: they are non-linear. You can make the corners of neighbouring rectangles coincide, but between the corners the image is scaled non-uniformly, and these non-uniformities cannot be made identical on the two sides of a dividing line.
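To make that non-linearity concrete, here is a small standalone sketch (not taken from the question's code; the corner coordinates are made up for illustration). Two adjacent 100x100 cells share the edge x = 100, and each gets its own perspective transform that agrees exactly at the two shared corners, yet the midpoint of the shared edge is mapped to different places by the two transforms:

import numpy as np
import cv2

# left and right cells of a 2x1 grid, plus distorted target corners
# (the two shared corners map identically under both transforms)
src_left  = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]])
dst_left  = np.float32([[0, 0], [100, 0], [120, 110], [0, 100]])
src_right = np.float32([[100, 0], [200, 0], [200, 100], [100, 100]])
dst_right = np.float32([[100, 0], [200, 0], [200, 100], [120, 110]])

M_left  = cv2.getPerspectiveTransform(src_left,  dst_left)
M_right = cv2.getPerspectiveTransform(src_right, dst_right)

# map the midpoint of the shared edge with both homographies
mid = np.float32([[[100, 50]]])
print(cv2.perspectiveTransform(mid, M_left))   # roughly (109.1, 50.0)
print(cv2.perspectiveTransform(mid, M_right))  # roughly (111.1, 61.1)
# Both results lie on the line through the two shared corners, but at different
# positions along it, so the two warped cells cannot meet cleanly at the seam.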
You can, however, use affine transforms, which scale the image uniformly. That guarantees that if two points on a straight line coincide, every other point on that line coincides as well. The only catch is that an affine transform is determined by a triangle, so you need to split your quadrilaterals into triangles. In the following code, for example, the center of each quadrilateral is used as an additional vertex, splitting each quadrilateral into 4 triangles.
import numpy as np
import matplotlib.pyplot as plt
import cv2

# generate a test image
im = np.full((400, 600), 255, 'u1')
h, w = im.shape
for i in range(1, w//20):
    im = cv2.line(im, (i*20, 0), (i*20, h), i*8)
for i in range(1, h//20):
    im = cv2.line(im, (0, i*20), (w, i*20), i*10)
plt.figure(figsize=(w/30, h/30))
plt.imshow(im, 'gray')
plt.show()

# Number of grid cells
nx, ny = 3, 2
p0 = np.meshgrid(np.linspace(0, w-1, nx+1, dtype='f'), np.linspace(0, h-1, ny+1, dtype='f'))
print(np.vstack(p0))
p1 = [v.copy() for v in p0]

# Move the central points
p1[0][1,1] -= 30; p1[1][1,1] -= 40
p1[0][1,2] += 20; p1[1][1,2] += 10
print(np.vstack(p1))

# Set perspective = True to see what happens if we use perspective transform
perspective = False

im1 = np.zeros_like(im)
for i in range(nx):
    for j in range(ny):
        x0, y0 = p0[0][j,i], p0[1][j,i]
        c0 = np.stack((p0[0][j:(j+2),i:(i+2)].ravel() - x0, p0[1][j:(j+2),i:(i+2)].ravel() - y0))
        c1 = np.stack((p1[0][j:(j+2),i:(i+2)].ravel(), p1[1][j:(j+2),i:(i+2)].ravel()))
        if perspective:
            ic0 = np.round(c0).astype('i')
            ic1 = np.round(c1).astype('i')
            M = cv2.getPerspectiveTransform(c0.T, c1.T)
            imw = cv2.warpPerspective(im[ic0[1,0]:ic0[1,3], ic0[0,0]:ic0[0,3]], M, (w, h))
            im1 |= cv2.fillConvexPoly(np.zeros_like(im), ic1[:,[0,1,3,2]].T, 255) & imw
        else:
            c0 = np.append(c0, np.mean(c0, axis=1, keepdims=True), 1)
            c1 = np.append(c1, np.mean(c1, axis=1, keepdims=True), 1)
            ic0 = np.round(c0).astype('i')
            ic1 = np.round(c1).astype('i')
            for ind in ([0,1,4], [1,3,4], [3,2,4], [2,0,4]):
                M = cv2.getAffineTransform(c0[:,ind].T, c1[:,ind].T)
                imw = cv2.warpAffine(im[ic0[1,0]:ic0[1,3], ic0[0,0]:ic0[0,3]], M, (w, h))
                im1 |= cv2.fillConvexPoly(np.zeros_like(im), ic1[:,ind].T, 255) & imw

plt.figure(figsize=(w/30, h/30))
plt.imshow(im1, 'gray')
plt.show()
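If you want to drop the same idea into the pipeline from the question, one possible shape for it is a helper that warps one quadrilateral as four triangles around its center and composites them with per-triangle masks. This is only a sketch: affine_warp_quad is a hypothetical name, and it assumes corners/newCorners each hold four (x, y) points in winding order, as Perspective_transform receives them.

def affine_warp_quad(roi, corners, newCorners, out_size):
    """Sketch: warp one quad by splitting it into 4 triangles around its center."""
    h, w = out_size
    c0 = np.float32(corners).reshape(4, 2)
    c1 = np.float32(newCorners).reshape(4, 2)
    # append each quad's center as a 5th vertex
    c0 = np.vstack([c0, c0.mean(axis=0)])
    c1 = np.vstack([c1, c1.mean(axis=0)])
    out = np.zeros((h, w, roi.shape[2]), dtype=roi.dtype)
    for ind in ([0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]):
        M = cv2.getAffineTransform(c0[ind], c1[ind])
        warped = cv2.warpAffine(roi, M, (w, h))
        mask = cv2.fillConvexPoly(np.zeros((h, w), np.uint8),
                                  np.round(c1[ind]).astype(np.int32), 255)
        out[mask > 0] = warped[mask > 0]
    return out

# usage (sketch), in place of the Perspective_transform call:
# overlay = affine_warp_quad(roi, verticesPoly_old, vert_poly, img1.shape[:2])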