I am trying to use MSER to extract keypoints and SIFT as the feature descriptor, and then match the detected keypoints. I did the following in Python:
import numpy as np
import cv2
from matplotlib import pyplot as plt

img1 = cv2.imread('1.jpg')
img2 = cv2.imread('2.jpg')

# Detect MSER keypoints in both images.
mser = cv2.MSER_create()
kp1 = mser.detect(img1)
kp2 = mser.detect(img2)

# Compute SIFT descriptors for those keypoints.
sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, kp1)
kp2, des2 = sift.detectAndCompute(img2, kp2)

# Match descriptors and keep matches that pass Lowe's ratio test.
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
good = []
for m, n in matches:
    if m.distance < 0.8 * n.distance:
        good.append(m)
good = sorted(good, key=lambda x: x.distance)
print(len(good))

matching_result = cv2.drawMatches(img1, kp1, img2, kp2, good, None, flags=2)
cv2.imwrite('result.jpg', matching_result)
However, I get the following error:
kp1, des1 = sift.detectAndCompute(img1, kp1)
TypeError: mask is not a numpy array, neither a scalar
How can I fix this? Also, am I using the detector and descriptor correctly?
Answer 0 (score: 0)
You can compute descriptors from existing keypoints. Note that the second argument of detectAndCompute() is a mask, not a list of keypoints, which is why passing kp1 there raises the TypeError. A quick fix is to change the following lines in your code:
kp1, des1 = sift.detectAndCompute(img1, kp1)
kp2, des2 = sift.detectAndCompute(img2, kp2)
and use the compute() function instead:
kp1, des1 = sift.compute(img1, kp1)
kp2, des2 = sift.compute(img2, kp2)
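For completeness, here is a minimal sketch of the corrected pipeline, under the same assumptions as the question (images named '1.jpg' and '2.jpg', and an OpenCV build where SIFT lives in cv2.xfeatures2d; in OpenCV 4.4+ you would use cv2.SIFT_create() from the main module instead):

import cv2

img1 = cv2.imread('1.jpg')
img2 = cv2.imread('2.jpg')

# Detect keypoints with MSER, describe them with SIFT.
mser = cv2.MSER_create()
sift = cv2.xfeatures2d.SIFT_create()

kp1 = mser.detect(img1)
kp2 = mser.detect(img2)

# compute() reuses the MSER keypoints and only computes descriptors;
# it may drop keypoints for which no descriptor can be computed.
kp1, des1 = sift.compute(img1, kp1)
kp2, des2 = sift.compute(img2, kp2)

# Brute-force matching with Lowe's ratio test.
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.8 * n.distance]
good = sorted(good, key=lambda x: x.distance)
print(len(good))

# Draw the surviving matches side by side and save the result.
result = cv2.drawMatches(img1, kp1, img2, kp2, good, None, flags=2)
cv2.imwrite('result.jpg', result)

The ratio-test threshold of 0.8 and the BFMatcher defaults (L2 norm, which suits SIFT) are taken from the question; tune them for your own images.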