I've been using the excellent tutorials on PyImageSearch.com to get a Pi (v3) to recognise some playing cards. It has been working so far, but the method described in the tutorial is really aimed at sharp-cornered rectangles, and playing cards of course have rounded corners. This means the contour corners end up drawn slightly offset from the actual card, so the cropped and deskewed image I get back is slightly rotated, which in turn throws off the rank/suit recognition a little. The green contour is what OpenCV gives me; you can see that, compared with the red lines I've drawn marking the actual edges, it is offset/rotated. My question is: how do I make it follow those red lines, i.e. detect the actual edges?
Here is the code currently being run to get that result:
import cv2
import imutils

# Inside the main video loop; vs is an imutils VideoStream started elsewhere.
frame = vs.read()
frame = cv2.flip(frame, 1)
frame = imutils.resize(frame, width=640)
image = frame.copy()  # copy the frame so drawing contours doesn't corrupt the original
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(gray, 11, 17, 17)
edges = imutils.auto_canny(gray)
cv2.imshow("Edge map", edges)

# find contours in the edged image, keep only the largest
# ones, and initialize our screen contour
_, cnts, _ = cv2.findContours(edges.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:3]
screenCnt = None

# loop over our contours
for c in cnts:
    # approximate the contour
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.05 * peri, True)
    # if our approximated contour has four points, then
    # we can assume that we have found our card
    if len(approx) == 4:
        screenCnt = approx
        break

if screenCnt is not None:  # guard against frames where no 4-point contour was found
    cv2.drawContours(image, [screenCnt], -1, (0, 255, 0), 3)

Answer (score: 0)
It turns out I just needed to read the OpenCV contour docs a little more closely. What I was basically looking for was the minimum-area (rotated) rectangle around my contour:
import numpy as np

rect = cv2.minAreaRect(cnt)  # minimal-area rectangle, rotated to fit the contour
box = cv2.boxPoints(rect)    # the four corner points of that rectangle
box = np.int0(box)           # the box is now the new four-point contour
In my case, every instance of the screenCnt variable now simply becomes box, and the rest of the code carries on working as before.
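For the crop-and-deskew step mentioned in the question, the four box points can be fed straight into a perspective warp. Below is a minimal sketch of that idea, not part of the original answer: the order_points helper, the 200x300 output size, and the frame variable name are my own assumptions.

import cv2
import numpy as np

def order_points(pts):
    # order the four box points as top-left, top-right, bottom-right, bottom-left
    rect = np.zeros((4, 2), dtype="float32")
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1)
    rect[0] = pts[np.argmin(s)]  # top-left has the smallest x + y
    rect[2] = pts[np.argmax(s)]  # bottom-right has the largest x + y
    rect[1] = pts[np.argmin(d)]  # top-right has the smallest y - x
    rect[3] = pts[np.argmax(d)]  # bottom-left has the largest y - x
    return rect

# warp the card to an upright 200x300 image using the box from minAreaRect
src = order_points(box.astype("float32"))
dst = np.array([[0, 0], [199, 0], [199, 299], [0, 299]], dtype="float32")
M = cv2.getPerspectiveTransform(src, dst)
card = cv2.warpPerspective(frame, M, (200, 300))

Note that minAreaRect only gives a tightly fitting rotated rectangle; if the card is held at a steep angle to the camera (true perspective distortion rather than in-plane rotation), the four corners from approxPolyDP would still be the better input to the warp.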