Image Stitching

Date: 2019-05-05 22:13:17

Tags: python opencv image-processing

I recorded a video while rotating a bottle. Then I grabbed the frames from the video and cut the center block out of every image.

[image]

So for all the frames I got images like the following:
[image]

I tried to stitch them together to get a panorama, but got a poor result. I used the following program:

import glob

# from panorama import Panorama
import sys
import numpy
import imutils
import cv2


def readImages(imageString):
    images = []

    # Get images from arguments.
    for i in range(0, len(imageString)):
        img = cv2.imread(imageString[i])
        images.append(img)

    return images


def findAndDescribeFeatures(image):
    # Getting gray image
    grayImage = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Find and describe the features.
    # Fast: sift = cv2.xfeatures2d.SURF_create()
    sift = cv2.xfeatures2d.SIFT_create()

    # Find interest points.
    keypoints = sift.detect(grayImage, None)

    # Computing features.
    keypoints, features = sift.compute(grayImage, keypoints)

    # Converting keypoints to numbers.
    keypoints = numpy.float32([kp.pt for kp in keypoints])

    return keypoints, features
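
# Note: with OpenCV >= 4.4 SIFT lives in the main module again, so
# cv2.SIFT_create() replaces cv2.xfeatures2d.SIFT_create(); the separate
# detect() and compute() calls can also be fused into a single
# sift.detectAndCompute(grayImage, None) call.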


def matchFeatures(featuresA, featuresB):
    # Slow: featureMatcher = cv2.DescriptorMatcher_create("BruteForce")
    featureMatcher = cv2.DescriptorMatcher_create("FlannBased")
    matches = featureMatcher.knnMatch(featuresA, featuresB, k=2)
    return matches
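
# Note: the FLANN-based matcher only works with float32 descriptors such as
# SIFT's; binary descriptors (e.g. ORB) would need a BruteForce-Hamming
# matcher instead.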


def generateHomography(allMatches, keypointsA, keypointsB, ratio, ransacRep):
    if not allMatches:
        return None
    matches = []

    for match in allMatches:
        # Lowe's ratio test
        if len(match) == 2 and (match[0].distance / match[1].distance) < ratio:
            matches.append(match[0])

    pointsA = numpy.float32([keypointsA[m.queryIdx] for m in matches])
    pointsB = numpy.float32([keypointsB[m.trainIdx] for m in matches])

    if len(pointsA) > 4:
        H, status = cv2.findHomography(pointsA, pointsB, cv2.RANSAC, ransacRep)
        return matches, H, status
    else:
        return None
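
# Note: cv2.findHomography() needs at least 4 point pairs, so the guard
# above is stricter than necessary (len(pointsA) >= 4 would do). When this
# function returns None, the tuple unpacking in the main loop raises the
# TypeError that is silently swallowed there.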


paths = glob.glob("C:/Users/andre/Desktop/Panorama-master/frames/*.jpg")
images = readImages(paths[::-1])

while len(images) > 1:
    imgR = images.pop()
    imgL = images.pop()

    interestsR, featuresR = findAndDescribeFeatures(imgR)
    interestsL, featuresL = findAndDescribeFeatures(imgL)
    try:
        try:
            allMatches = matchFeatures(featuresR, featuresL)
            _, H, _ = generateHomography(allMatches, interestsR, interestsL, 0.75, 4.0)

            result = cv2.warpPerspective(imgR, H,
                                     (imgR.shape[1] + imgL.shape[1], imgR.shape[0]))
            result[0:imgL.shape[0], 0:imgL.shape[1]] = imgL
            images.append(result)
        except TypeError:
            pass
    except cv2.error:
        pass
result = imutils.resize(images[0], height=260)
cv2.imshow("Result", result)
cv2.imwrite("Result.jpg", result)

cv2.waitKey(0)

My result is:
[image]

Maybe someone knows how to do it better? I thought that using small strips from the frames should eliminate the roundness... but...

Data: https://1drv.ms/f/s!ArcAdXhy6TxPho0FLKxyRCL-808Y9g

1 Answer:

Answer 0 (score: 1):

I managed to achieve a nice result. I rewrote only a little of your code; here is the changed part:

def generateTransformation(allMatches, keypointsA, keypointsB, ratio):
    if not allMatches:
        return None
    matches = []

    for match in allMatches:
        # Lowe's ratio test
        if len(match) == 2 and (match[0].distance / match[1].distance) < ratio:
            matches.append(match[0])

    pointsA = numpy.float32([keypointsA[m.queryIdx] for m in matches])
    pointsB = numpy.float32([keypointsB[m.trainIdx] for m in matches])

    if len(pointsA) > 2:
        transformation = cv2.estimateRigidTransform(pointsA, pointsB, True)
        if transformation is None or transformation.shape[1] < 1 or transformation.shape[0] < 1:
            return None
        return transformation
    else:
        return None
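
# Note: cv2.estimateRigidTransform() is deprecated and removed from recent
# OpenCV releases; on OpenCV 4.x the closest substitute is
# cv2.estimateAffine2D(pointsA, pointsB), which returns a (matrix,
# inlier_mask) pair instead of a bare matrix.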


paths = glob.glob("a*.jpg")
images = readImages(paths[::-1])
result = images[0]

while len(images) > 1:
    imgR = images.pop()
    imgL = images.pop()

    interestsR, featuresR = findAndDescribeFeatures(imgR)
    interestsL, featuresL = findAndDescribeFeatures(imgL)
    allMatches = matchFeatures(featuresR, featuresL)

    transformation = generateTransformation(allMatches, interestsR, interestsL, 0.75)
    if transformation is None or transformation[0, 2] < 0:
        images.append(imgR)
        continue
    transformation[0, 0] = 1
    transformation[1, 1] = 1
    transformation[0, 1] = 0
    transformation[1, 0] = 0
    transformation[1, 2] = 0
    result = cv2.warpAffine(imgR, transformation, (imgR.shape[1] + 
                int(transformation[0, 2] + 1), imgR.shape[0]))
    result[:, :imgL.shape[1]] = imgL
    cv2.imshow("R", result)
    images.append(result)
    cv2.waitKey(1)

cv2.imshow("Result", result)

So the key thing I changed is the transformation of the images. I use estimateRigidTransform() instead of findHomography() to calculate the transformation of the image. From that matrix I extract only the x coordinate translation, which sits in the [0, 2] cell of the resulting affine transformation matrix transformation. I set the other matrix elements as if it were an identity transformation (no scaling, no perspective, no rotation or y translation). Then I pass it to warpAffine() to transform imgR, the same way you did with warpPerspective().
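As a minimal standalone sketch of that idea (the helper name stitchByXTranslation and the tx argument are illustrative, not part of the code above), the whole warp then degenerates to a pure horizontal translation:

import numpy
import cv2


def stitchByXTranslation(imgL, imgR, tx):
    # Pure x-translation affine matrix: unit scale, no rotation, no y shift.
    translation = numpy.float32([[1, 0, tx],
                                 [0, 1, 0]])
    # Widen the canvas by the shift, warp imgR into place, then paste imgL
    # over the left part, as the loop above does.
    result = cv2.warpAffine(imgR, translation,
                            (imgR.shape[1] + int(tx + 1), imgR.shape[0]))
    result[:, :imgL.shape[1]] = imgL
    return result

Forcing the matrix into this shape throws away everything estimateRigidTransform() found except the horizontal shift.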

This can be done because you have a stable camera position and a rotating object, captured with a straight frontal view of the object. That means you do not need any perspective/scaling/rotation corrections and can simply "glue" the images together along the x axis.

I think your approach fails because you are actually viewing the bottle with the camera tilted slightly downward, or the bottle is not in the middle of the screen. I will try to describe it with an image. I marked some text on the bottle in red. For example, the algorithm finds a matching pair of points (green) at the bottom of the captured round object. Note that the point moves not only to the right, but diagonally upward as well. The program then calculates the transformation taking the upward movement into account, and the errors get worse frame by frame.

[image]

The identification of the matching image points can also be slightly off, so extracting only the x translation works better, because you give the algorithm a "hint" about your actual situation. That makes it less applicable to other conditions, but in your case it improves the result a lot.

I also filtered out some incorrect results with the transformation[0, 2] < 0 check (the bottle can rotate only in one direction, and the code would not work if that value were negative).