I am just working through a feature-detection example in OpenCV. The example is shown below. It gives me the following error:
'module' object has no attribute 'drawMatches'
I checked the OpenCV docs, but I am not sure why I am getting this error. Does anyone know why?
import numpy as np
import cv2
import matplotlib.pyplot as plt
img1 = cv2.imread('box.png',0) # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage
# Initiate SIFT detector
orb = cv2.ORB()
# find the keypoints and descriptors with SIFT
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des1,des2)
# Draw first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
plt.imshow(img3),plt.show()
Error:
Traceback (most recent call last):
File "match.py", line 22, in <module>
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
AttributeError: 'module' object has no attribute 'drawMatches'
Answer 0 (Score: 78)
I am also late to the party, but I installed OpenCV 2.4.9 for Mac OS X and the drawMatches function does not exist in my distribution. I also tried the second approach with find_obj, and that did not work for me either. With that, I decided to write my own implementation that mimics drawMatches as closely as I could, and this is what I produced.
I provided my own images, where one is of a cameraman and the other is the same image rotated by 55 degrees counterclockwise.
The basis of what I wrote is that I allocate an output RGB image where the number of rows is the maximum of the two images, to accommodate placing both images in the output, and the number of columns is simply the sum of the two images' columns. Note that I assume both images are grayscale.
I place each image in its corresponding spot, then loop over all of the matched keypoints. I extract which keypoints matched between the two images, then extract their (x,y) coordinates. I draw a circle at each detected location, then draw a line connecting these circles.
Bear in mind that the keypoints detected in the second image are with respect to its own coordinate system. If you want to place them in the final output image, you need to offset the column coordinate by the number of columns in the first image so that the column coordinate is with respect to the coordinate system of the output image.
Without further ado:
import numpy as np
import cv2
def drawMatches(img1, kp1, img2, kp2, matches):
    """
    My own implementation of cv2.drawMatches as OpenCV 2.4.9
    does not have this function available but it's supported in
    OpenCV 3.0.0

    This function takes in two images with their associated
    keypoints, as well as a list of DMatch data structure (matches)
    that contains which keypoints matched in which images.

    An image will be produced where a montage is shown with
    the first image followed by the second image beside it.

    Keypoints are delineated with circles, while lines are connected
    between matching keypoints.

    img1,img2 - Grayscale images
    kp1,kp2 - Detected list of keypoints through any of the OpenCV keypoint
              detection algorithms
    matches - A list of matches of corresponding keypoints through any
              OpenCV keypoint matching algorithm
    """

    # Create a new output image that concatenates the two images together
    # (a.k.a) a montage
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    # Create the output image
    # The rows of the output are the largest between the two images
    # and the columns are simply the sum of the two together
    # The intent is to make this a colour image, so make this 3 channels
    out = np.zeros((max([rows1,rows2]), cols1+cols2, 3), dtype='uint8')

    # Place the first image to the left
    out[:rows1,:cols1] = np.dstack([img1, img1, img1])

    # Place the next image to the right of it
    out[:rows2,cols1:] = np.dstack([img2, img2, img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:

        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns
        # y - rows
        (x1,y1) = kp1[img1_idx].pt
        (x2,y2) = kp2[img2_idx].pt

        # Draw a small circle at both co-ordinates
        # radius 4
        # colour blue
        # thickness = 1
        cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points
        # thickness = 1
        # colour blue
        cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255,0,0), 1)

    # Show the image
    cv2.imshow('Matched Features', out)
    cv2.waitKey(0)
    cv2.destroyWindow('Matched Features')

    # Also return the image if you'd like a copy
    return out
To illustrate that this works, here are the two images I used:
I used OpenCV's ORB detector to detect the keypoints, and the normalized Hamming distance as the distance measure for similarity, since these are binary descriptors. As such:
import numpy as np
import cv2
img1 = cv2.imread('cameraman.png', 0) # Original image - ensure grayscale
img2 = cv2.imread('cameraman_rot55.png', 0) # Rotated image - ensure grayscale
# Create ORB detector with 1000 keypoints with a scaling pyramid factor
# of 1.2
orb = cv2.ORB(1000, 1.2)
# Detect keypoints of original image
(kp1,des1) = orb.detectAndCompute(img1, None)
# Detect keypoints of rotated image
(kp2,des2) = orb.detectAndCompute(img2, None)
# Create matcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Do matching
matches = bf.match(des1,des2)
# Sort the matches based on distance. Least distance
# is better
matches = sorted(matches, key=lambda val: val.distance)
# Show only the top 10 matches - also save a copy for use later
out = drawMatches(img1, kp1, img2, kp2, matches[:10])
This is the image I get:
Using knnMatch from cv2.BFMatcher
I want to note that the above code only works if you assume that the matches appear in a 1D list. However, if you decide to use the knnMatch method from cv2.BFMatcher, what is returned is a list of lists. Specifically, given the descriptors in img1 called des1 and the descriptors in img2 called des2, each element of the list returned from knnMatch is another list of the k matches from des2 that are closest to a descriptor in des1. Therefore, the first element of the knnMatch output is a list of the k matches from des2 that are closest to the first descriptor found in des1, the second element is a list of the k matches from des2 that are closest to the second descriptor found in des1, and so on, as the short sketch below illustrates.
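A short sketch of that structure, reusing des1 and des2 from the earlier code (the variable names here are only for illustration):
# Each element of the knnMatch result is itself a list of k DMatch objects
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
knn_matches = bf.knnMatch(des1, des2, k=2)

best, second_best = knn_matches[0]   # the two closest matches in des2 for the first descriptor in des1
print(best.trainIdx, best.distance)  # index into des2/kp2 and the Hamming distance of the best match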
为了最有道理des2
,您必须限制与des1
匹配的邻居总数。原因是因为你想使用至少两个匹配的点来验证匹配的质量,如果质量足够好,你会想要用这些来绘制你的匹配并在屏幕上显示它们。你可以使用一个非常简单的比率测试(信用额转到David Lowe),以确保从knnMatch
的第一个匹配点到k=2
中的描述符的距离与某个距离相比较来自des2
的第二个匹配点。因此,要将从des1
返回的内容转换为上面编写的代码所需的内容,请迭代匹配,使用上面的比率测试并检查它是否通过。如果是,请将第一个匹配的关键点添加到新列表中。
Assuming that you created all of the variables as you did before declaring the BFMatcher instance, you would now do the following to adapt the knnMatch method for use with drawMatches.
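A sketch of that adaptation, assuming des1, des2, kp1, kp2 and the drawMatches function defined above; the 0.75 threshold is an illustrative choice for Lowe's ratio test:
# Leave crossCheck at its default (False) so knnMatch can return two neighbours per descriptor
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = bf.knnMatch(des1, des2, k=2)

# Ratio test: keep a match only if its best match is clearly closer
# than its second-best match
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append(m)

# good is now a flat (1D) list of DMatch objects, which is what
# the drawMatches function above expects
out = drawMatches(img1, kp1, img2, kp2, good)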
I want to attribute the above modification to user @ryanmeasel; the answer in which these modifications were found is in his post: OpenCV Python : No drawMatchesknn function.
Answer 1 (Score: 18)
The drawMatches function is not part of the Python interface. As you can see in the docs, it is currently only defined for C++.
An excerpt from the docs:
C++: void drawMatches(const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<DMatch>& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<char>& matchesMask=vector<char>(), int flags=DrawMatchesFlags::DEFAULT )
C++: void drawMatches(const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<vector<DMatch>>& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<vector<char>>& matchesMask=vector<vector<char> >(), int flags=DrawMatchesFlags::DEFAULT )
If the function had a Python interface, you would find something like this:
Python: cv2.drawMatches(img1, keypoints1, [...])
EDIT
There actually was a commit that introduced this function 5 months ago. However, it is not (yet) in the official documentation. Make sure you are using the newest OpenCV version (2.4.7). For the sake of completeness, the function interface for OpenCV 3.0.0 looks like this:
cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) → outImg
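In practice this means the script from the question works on OpenCV 3.0.0 with one small change: the Python binding also expects the outImg argument, which can be passed as None so that OpenCV allocates the output for you. A minimal sketch using the variables from the question:
# On OpenCV >= 3.0.0, pass None for outImg and let OpenCV allocate the output image
img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=2)
plt.imshow(img3), plt.show()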
Answer 2 (Score: 16)
I know this question has an accepted answer that is correct, but if you are using OpenCV 2.4.8 and not 3.0(-dev), a workaround could be to use some functions from the included samples found in opencv\sources\samples\python2\find_obj.
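A sketch of that workaround, assuming the find_obj.py sample from the OpenCV 2.4 distribution is importable (its filter_matches and explore_match helpers apply a ratio test and draw the matched pairs):
# Uses helpers from OpenCV 2.4's samples/python2/find_obj.py
from find_obj import filter_matches, explore_match
import cv2

img1 = cv2.imread('box.png', 0)           # queryImage
img2 = cv2.imread('box_in_scene.png', 0)  # trainImage

orb = cv2.ORB()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

bf = cv2.BFMatcher(cv2.NORM_HAMMING)
raw_matches = bf.knnMatch(des1, des2, k=2)

# filter_matches applies a ratio test and returns the matched point pairs;
# explore_match draws them in a window named 'find_obj'
p1, p2, kp_pairs = filter_matches(kp1, kp2, raw_matches)
explore_match('find_obj', img1, img2, kp_pairs)
cv2.waitKey(0)
cv2.destroyAllWindows()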
Here is the output image: