Drawing non-matching points with OpenCV

Asked: 2017-05-04 06:52:39

Tags: python opencv sift

I have followed the OpenCV Feature Detection and Description tutorial and used SIFT and other algorithms in OpenCV to find matching feature points between two images. As I understand it, these algorithms find the regions that are similar between the two images. What I am interested in, however, is identifying the regions that are different or dissimilar.

How can I draw all the non-matching feature points on both images? Furthermore, can I draw boundaries around these non-matching points, so that I can show which regions of the two images differ?

I am using Python on Windows 7, with OpenCV built from the latest source.

2 answers:

Answer 0 (score: 2)

  1. Drawing all non-matching feature points on both images:

     This task is quite simple once you know the structure of the Matcher objects produced by matching the two sets of descriptors (matches = bf.match(des1,des2)). The two attributes of the Matcher objects relevant to this problem are:

     • DMatch.trainIdx: index of the descriptor in the train descriptors (or of the keypoint in the train image)
     • DMatch.queryIdx: index of the descriptor in the query descriptors (or of the keypoint in the query image)

     Knowing this, and as @uzair_syed said, it is just a simple list operations task (a minimal sketch follows this answer).

  2. Drawing boundaries around the non-matching points:

     To achieve this, I would do something like the following:

     • Create a black mask with a white pixel at each non-matching point.
     • Dilate the mask with a large kernel (e.g. 15 x 15 px), depending on how dense the clusters of non-matching points are.
     • Erode the mask with the same kernel size.
     • Finally, apply the findContours function on the mask to obtain the boundaries of the non-matching regions.

     For more details, check this question and its answer.

  Hope it gets you on the right track!
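
As an illustration of the first point, here is a minimal sketch of those list operations, assuming SIFT features and a brute-force matcher. The file names, the crossCheck=True setting (used here only so that some keypoints genuinely remain unmatched; any other filtering, such as Lowe's ratio test in the answer below, works the same way) and names like kp1_unmatched are illustrative assumptions, not part of the original answer.

import cv2

# Load the two images in grayscale (file names are placeholders).
img1 = cv2.imread('Src.png', 0)   # query image
img2 = cv2.imread('Dest.png', 0)  # train image

# SIFT keypoints and descriptors (cv2.xfeatures2d in OpenCV 3.x).
sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching; crossCheck=True keeps only mutually consistent
# matches, so some keypoints end up with no match at all.
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = bf.match(des1, des2)

# DMatch.queryIdx indexes kp1 and DMatch.trainIdx indexes kp2, so any
# keypoint whose index never appears in the matches is a non-matching one.
matched_idx1 = set(m.queryIdx for m in matches)
matched_idx2 = set(m.trainIdx for m in matches)
kp1_unmatched = [kp for i, kp in enumerate(kp1) if i not in matched_idx1]
kp2_unmatched = [kp for i, kp in enumerate(kp2) if i not in matched_idx2]

# Draw only the non-matching keypoints on each image.
out1 = cv2.drawKeypoints(img1, kp1_unmatched, None, color=(255, 0, 0), flags=0)
out2 = cv2.drawKeypoints(img2, kp2_unmatched, None, color=(255, 0, 0), flags=0)
cv2.imwrite('Src_unmatched.png', out1)
cv2.imwrite('Dest_unmatched.png', out2)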

Answer 1 (score: 0)

It turned out to be a simple list operations task. Here is my Python code:

# code copied from
# http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.html

import numpy as np
import cv2
from matplotlib import pyplot as plt
from scipy.spatial.distance import euclidean

MIN_MATCH_COUNT = 10

img1 = cv2.imread('Src.png',0)  # queryImage
img2 = cv2.imread('Dest.png',0) # trainImage

# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)

FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks = 50)

flann = cv2.FlannBasedMatcher(index_params, search_params)

matches = flann.knnMatch(des1,des2,k=2)

# store all the good matches as per Lowe's ratio test.
good = []
for m,n in matches:
    if m.distance < 0.7*n.distance:
        good.append(m)

if len(good)>MIN_MATCH_COUNT:
    src_pts = np.float32([ kp1[m.queryIdx].pt for m in good ]).reshape(-1,1,2)
    dst_pts = np.float32([ kp2[m.trainIdx].pt for m in good ]).reshape(-1,1,2)

    kp1_matched=([ kp1[m.queryIdx] for m in good ])
    kp2_matched=([ kp2[m.trainIdx] for m in good ])

    kp1_miss_matched=[kp for kp in kp1 if kp not in kp1_matched]
    kp2_miss_matched=[kp for kp in kp2 if kp not in kp2_matched]

    # draw only miss matched or not matched keypoints location
    img1_miss_matched_kp = cv2.drawKeypoints(img1,kp1_miss_matched, None,color=(255,0,0), flags=0)
    plt.imshow(img1_miss_matched_kp),plt.show()

    img2_miss_matched_kp = cv2.drawKeypoints(img2,kp2_miss_matched, None,color=(255,0,0), flags=0)
    plt.imshow(img2_miss_matched_kp),plt.show()

    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,5.0)
    matchesMask = mask.ravel().tolist()

    h,w = img1.shape
    pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
    dst = cv2.perspectiveTransform(pts,M)

else:
    print "Not enough matches are found - %d/%d" % (len(good),MIN_MATCH_COUNT)
    matchesMask = None
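
To also obtain the boundaries suggested in the first answer (black mask, dilate, erode, findContours), a sketch of such a helper might look like the following. The function name, the 15 x 15 kernel and the output handling are illustrative assumptions; it would be called with the img1 and kp1_miss_matched variables computed inside the successful-match branch above.

import numpy as np
import cv2

def draw_unmatched_regions(img, unmatched_kps, kernel_size=15):
    """Outline clusters of non-matching keypoints (illustrative helper)."""
    # Black mask with a white pixel at each non-matching keypoint.
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    for kp in unmatched_kps:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        mask[y, x] = 255

    # Dilate then erode with a large kernel so nearby points merge into blobs.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    mask = cv2.dilate(mask, kernel)
    mask = cv2.erode(mask, kernel)

    # findContours returns 3 values in OpenCV 3.x and 2 in 2.x/4.x;
    # taking the second-to-last element works for all of them.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

    # Draw the boundaries of the non-matching regions on a colour copy.
    out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    cv2.drawContours(out, contours, -1, (0, 0, 255), 2)
    return out

# Example usage with the variables from the script above:
# regions = draw_unmatched_regions(img1, kp1_miss_matched)
# plt.imshow(cv2.cvtColor(regions, cv2.COLOR_BGR2RGB)), plt.show()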