Image processing: converting a normal picture into a fisheye image with an intrinsic matrix

Date: 2019-06-20 14:23:12

Tags: python opencv image-processing computer-vision fisheye

I need to synthesize many fisheye images with different intrinsic matrices from normal pictures. I am following the method described in this paper.

Ideally, if the algorithm is correct, the fisheye effect should look like this:

ideal fish eye effect

But when I use the algorithm to transform a picture

original image

it looks like this

effect

So below is my code flow: 1. First, I read in the original image

import cv2
import numpy as np
from PIL import Image
from scipy import ndimage


def read_img(image):
    img = ndimage.imread(image)  # returns an (h, w, 4) RGBA array: [R, G, B, 255]
    img_shape = img.shape
    print(img_shape)

    # get the pixel coordinates
    w = img_shape[1]  # the width
    h = img_shape[0]  # the height
    uv_coord = []
    for u in range(w):
        for v in range(h):
            uv_coord.append([float(u), float(v)])  # records the coords as [x1,y1], [x1,y2], [x1,y3], ...
    return np.array(uv_coord)
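The double loop above can also be written as a vectorized NumPy sketch (`pixel_grid` is my own helper name, assuming the same u-major [u, v] ordering as the loop):

```python
import numpy as np

def pixel_grid(w, h):
    """Build the same (w*h, 2) array of [u, v] pixel coordinates as the
    double loop: u varies slowest, v varies fastest."""
    u, v = np.meshgrid(np.arange(w, dtype=float),
                       np.arange(h, dtype=float),
                       indexing="ij")  # "ij" keeps the u-major ordering
    return np.column_stack((u.ravel(), v.ravel()))
```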

Then, based on the paper:

r(θ) = k1*θ + k2*θ^3 + k3*θ^5 + k4*θ^7, (1) where the k's are the distortion coefficients.

Given a pixel coordinate (x, y) in the pinhole projection image, the corresponding image coordinate (x', y') in the fisheye image can be computed as:

x' = r(θ)cos(φ), y' = r(θ)sin(φ), (2)

where φ = arctan((y - y0)/(x - x0)) and (x0, y0) is the principal point of the pinhole projection image.

The image coordinates (x', y') are then converted to pixel coordinates (xf, yf):

xf = mu * x' + u0, yf = mv * y' + v0, (3)

where (u0, v0) is the principal point of the fisheye image, and mu, mv denote the number of pixels per unit distance in the horizontal and vertical directions. So my guess is that mu, mv are just [fx, fy] of the intrinsic matrix, and (u0, v0) is [cx, cy].
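To convince myself I understand equations (1)-(3), I walked one point through them with made-up coefficients and intrinsics (all numbers below are placeholders, not my real calibration). Note that I normalize by the focal lengths before taking arctan so that θ is a true ray angle; whether that matches the paper's convention is part of what I'm unsure about:

```python
import numpy as np

# hypothetical distortion coefficients [k1, k2, k3, k4] and intrinsics
k = np.array([-0.39, 0.013, -0.0014, 0.0053])
fx, fy, cx, cy = 984.0, 981.0, 696.0, 233.0

# a pinhole pixel (x, y)
x, y = 900.0, 400.0
phi = np.arctan2(y - cy, x - cx)                    # azimuth, as in the text before eq. (2)
r_pinhole = np.hypot((x - cx) / fx, (y - cy) / fy)  # radius in normalized image coords
theta = np.arctan(r_pinhole)                        # angle of incidence of the ray

# eq. (1): radial distance on the fisheye image plane
r_theta = k[0]*theta + k[1]*theta**3 + k[2]*theta**5 + k[3]*theta**7

# eq. (2): fisheye image coordinates
x_prime, y_prime = r_theta * np.cos(phi), r_theta * np.sin(phi)

# eq. (3): back to pixel coordinates, taking mu = fx, mv = fy
xf, yf = fx * x_prime + cx, fy * y_prime + cy
print(xf, yf)
```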

def add_distortion(sourceUV, dmatrix, Kmatrix):
    '''Add fisheye distortion to the pixel coordinates of the given original image.
    input arguments:
    dmatrix          -- the distortion coefficients [k1, k2, k3, k4] for tweaking purposes
    Kmatrix          -- [fx, fy, cx, cy, s]'''
    u = sourceUV[:, 0]  # width in x
    v = sourceUV[:, 1]  # height in y

    rho = np.sqrt(u**2 + v**2)

    # get theta
    theta = np.arctan(rho)

    # rho_mat = np.array([rho, rho**3, rho**5, rho**7])
    rho_mat = np.array([theta, theta**3, theta**5, theta**7])

    # get rho(theta) = k1*theta + k2*theta**3 + k3*theta**5 + k4*theta**7
    rho_d = dmatrix @ rho_mat

    # get phi
    phi = np.arctan2((v - Kmatrix[3]), (u - Kmatrix[2]))
    xd = rho_d * np.cos(phi)
    yd = rho_d * np.sin(phi)

    # convert the coords from the image plane back to pixel coords
    ud = Kmatrix[0] * (xd + Kmatrix[4] * yd) + Kmatrix[2]
    vd = Kmatrix[1] * yd + Kmatrix[3]
    return np.column_stack((ud, vd))
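One thing I noticed while experimenting: pushing source pixels forward like this leaves holes in the destination image, so the usual alternative is to iterate over destination pixels and invert r(θ) numerically. A rough sketch of that inversion with Newton's method (the helper name and the plain-NumPy approach are my own, not from the paper):

```python
import numpy as np

def invert_r_theta(r_d, k, iters=20):
    """Solve r(theta) = k1*t + k2*t**3 + k3*t**5 + k4*t**7 = r_d for theta
    with Newton's method, starting from the linear-term guess t = r_d / k1."""
    k1, k2, k3, k4 = k
    t = r_d / k1  # initial guess from the linear term
    for _ in range(iters):
        f = k1*t + k2*t**3 + k3*t**5 + k4*t**7 - r_d
        df = k1 + 3*k2*t**2 + 5*k3*t**4 + 7*k4*t**6
        t = t - f / df
    return t
```

With θ recovered for every destination pixel, the undistorted radius is tan(θ), which gives the source pixel to sample, so the destination image can be filled without gaps (e.g. via cv2.remap).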

Then, after obtaining the distorted coordinates, I move the pixels like this, which is where I think the problem might be:

def main():
    image_name = "original.png"
    img = cv2.imread(image_name)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # cv2 reads the image as BGR

    w = img.shape[1]
    h = img.shape[0]
    uv_coord = read_img(image_name)

    # for adding distortion
    dmatrix = [-0.391942708316175, 0.012746418822063, -0.001374061848026, 0.005349692659231]

    # the intrinsic matrix of the original picture
    Kmatrix = np.array([9.842439e+02, 9.808141e+02, 1392/2, 2.331966e+02, 0.000000e+00])

    # Kmatrix = np.array([2234.23470710156, 2223.78349134123, 947.511596277837, 647.103139639432, -3.20443253476976])  # the distorted intrinsics
    uv = add_distortion(uv_coord, dmatrix, Kmatrix)

    i = 0
    dstimg = np.zeros_like(img)

    for x in range(w):
        for y in range(h):
            if i > (w * h - 1):
                break

            xu = uv[i][0]
            yu = uv[i][1]
            i += 1

            # if the new pixel is in bounds, copy from source pixel to destination pixel
            if 0 <= xu < img.shape[1] and 0 <= yu < img.shape[0]:
                dstimg[int(yu)][int(xu)] = img[int(y)][int(x)]

    img = Image.fromarray(dstimg, 'RGB')
    img.save('my.png')
    img.show()

However, this code does not behave the way I want. Could you guys help me debug it? I have spent 3 days on it and still can't spot the problem. Thanks!

0 Answers:

No answers yet