Finding similar images in a dataset

Time: 2017-07-03 22:00:02

Tags: python opencv image-processing machine-learning feature-detection

I am using the feature point detection tutorial from the OpenCV Python tutorials, as shown below:

import cv2
import numpy as np
from matplotlib import pyplot as plt

def drawMatches(img1, kp1, img2, kp2, matches):

    # Create a new output image that concatenates the two images together
    # (a.k.a) a montage
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    out = np.zeros((max([rows1,rows2]),cols1+cols2,3), dtype='uint8')

    # Place the first image to the left
    out[:rows1,:cols1] = np.dstack([img1, img1, img1])

    # Place the next image to the right of it
    out[:rows2,cols1:] = np.dstack([img2, img2, img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:

        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns
        # y - rows
        (x1,y1) = kp1[img1_idx].pt
        (x2,y2) = kp2[img2_idx].pt

        # Draw a small circle at both co-ordinates
        # radius 4
        # colour blue
        # thickness = 1
        cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)   
        cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points
        # thickness = 1
        # colour blue
        cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255, 0, 0), 1)


    # Show the image
    cv2.imshow('Matched Features', out)
    cv2.waitKey(0)
    cv2.destroyWindow('Matched Features')

    # Also return the image if you'd like a copy
    return out

def feature_matching():

    img1 = cv2.imread('image3.jpeg', 0)          
    img2 = cv2.imread('image2.jpeg', 0)


    # Initiate SIFT detector (cv2.SIFT() is the OpenCV 2.4 API; on OpenCV 3.x
    # use cv2.xfeatures2d.SIFT_create(), and on 4.4+ use cv2.SIFT_create())
    sift = cv2.SIFT()

    # find the keypoints and descriptors with SIFT
    kp1, des1 = sift.detectAndCompute(img1,None)
    kp2, des2 = sift.detectAndCompute(img2,None)

    # BFMatcher with default params
    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1,des2, k=2)

    # Apply ratio test
    good = []
    for m,n in matches:
        if m.distance < 0.75*n.distance:
            good.append(m)

    #gray1 = cv2.cvtColor(img1,cv2.COLOR_BGR2GRAY)
    #gray2 = cv2.cvtColor(img2,cv2.COLOR_BGR2GRAY)
    # cv2.drawMatchesKnn expects list of lists as matches.
    img3 = drawMatches(img1,kp1,img2,kp2,good)

    plt.imshow(img3),plt.show()
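
As a side note (assuming OpenCV 3.0 or later), the library ships a built-in cv2.drawMatches, so the hand-written helper above is only needed on OpenCV 2.4; the last two lines of feature_matching could then be replaced by something like:

    # On OpenCV 3.0+ the built-in drawer replaces the custom drawMatches helper;
    # flags=2 skips drawing keypoints that have no match
    img3 = cv2.drawMatches(img1, kp1, img2, kp2, good, None, flags=2)
    plt.imshow(img3), plt.show()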

I have a training dataset in which a particular object can have 3 to 4 images taken in different orientations, under different lighting, and so on.

For example: XYZ_1, XYZ_2

I have a test dataset that contains an image of one of the objects from the training dataset (with a different orientation, size, angle, etc.) and n image names such as [XYZ, ABC, DEF, ...etc.], so a test sample looks like (test_image, [XYZ, ABC, DEF, ...etc.]). So, assuming test_image is the object XYZ (whose training images are named XYZ_1, XYZ_2, XYZ_3, etc.), the output for that test sample should be XYZ.

How can I do this using feature point detection? Is there a way to build a model from the training dataset, save it, and then use it on the test dataset?
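
One possible approach (not from the original post, just a rough sketch): precompute SIFT descriptors for every training image, group them by object label, pickle them to disk as the "model", and then classify a test image by counting ratio-test matches against each object's descriptors and returning the object with the highest count. The directory layout, the file-naming convention (XYZ_1.jpeg, XYZ_2.jpeg, ...) and the helper names below are assumptions for illustration:

import os
import pickle

import cv2


def build_index(train_dir, index_path='train_descriptors.pkl'):
    # Hypothetical helper: compute and save SIFT descriptors for every
    # training image, grouped by the object label taken from the file name
    # (e.g. 'XYZ_1.jpeg' -> label 'XYZ').
    sift = cv2.SIFT()                       # cv2.SIFT_create() on OpenCV 4.4+
    index = {}                              # label -> list of descriptor arrays
    for name in os.listdir(train_dir):
        label = name.rsplit('_', 1)[0]
        img = cv2.imread(os.path.join(train_dir, name), 0)
        kp, des = sift.detectAndCompute(img, None)
        if des is not None:
            index.setdefault(label, []).append(des)
    with open(index_path, 'wb') as f:       # save the "model" for later reuse
        pickle.dump(index, f)
    return index


def classify(test_image_path, index, ratio=0.75):
    # Hypothetical helper: return the label whose training descriptors
    # collect the most ratio-test matches with the test image.
    sift = cv2.SIFT()
    bf = cv2.BFMatcher()
    img = cv2.imread(test_image_path, 0)
    kp, des_test = sift.detectAndCompute(img, None)
    scores = {}
    for label, des_list in index.items():
        count = 0
        for des_train in des_list:
            matches = bf.knnMatch(des_test, des_train, k=2)
            for pair in matches:
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                    count += 1
        scores[label] = count
    return max(scores, key=scores.get)

With something like this, build_index('train/') would be run once and the pickle reused across runs, and classify('test_image.jpeg', index) would return the best-matching label (XYZ in the example above). A bag-of-visual-words model would scale better to large datasets, but the counting scheme above is the most direct extension of the ratio-test code in the question.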

Any help would be appreciated!

0 Answers:

There are no answers yet.