Video stabilization in MATLAB using point feature matching on frames, without losing the RGB color

Asked: 2016-12-06 20:51:45

Tags: matlab video-processing

I want to stabilize a 13-minute video of a traffic intersection, shot from a quadcopter, without losing its 3 color channels (RGB). MATLAB's own example produces a grayscale video, which is unwanted for the main and future goal of vehicle tracking. New ideas are appreciated.

Below you can find my own code (it works, but converts the video to grayscale), adapted from MATLAB's own script as shown on MATLAB's related webpage: Video Stabilization Using Point Feature Matching.

clc; clear all; close all;

filename = 'Quad_video_erst.mp4';
% Note: 'ImageColorSpace','Intensity' makes the reader return grayscale frames
hVideoSrc = vision.VideoFileReader(filename, 'ImageColorSpace', 'Intensity');

% Create and open video file
myVideo = VideoWriter('vivi.avi');        
open(myVideo);
hVPlayer = vision.VideoPlayer;   

%% Step 1: Read Frames from a Movie File

for i=1:10 % testing for a short run 

    imgA = step(hVideoSrc); % Read the next frame into imgA
    imgB = step(hVideoSrc); % Read the frame after that into imgB


%% Step 2: SURF DETECTION

pointsA=surf_function_CAN(imgA);
pointsB=surf_function_CAN(imgB);



%% Step 3. Select Correspondences Between Points
% Extract FREAK descriptors for the corners
[featuresA, pointsA] = extractFeatures(imgA, pointsA);
[featuresB, pointsB] = extractFeatures(imgB, pointsB);

indexPairs = matchFeatures(featuresA, featuresB);
pointsA = pointsA(indexPairs(:, 1), :);
pointsB = pointsB(indexPairs(:, 2), :);


%% Step 4: Estimating Transform from Noisy Correspondences
[tform, pointsBm, pointsAm] = estimateGeometricTransform(...
    pointsB, pointsA, 'affine');
imgBp = imwarp(imgB, tform, 'OutputView', imref2d(size(imgB)));
pointsBmp = transformPointsForward(tform, pointsBm.Location);


%% Step 5: Transform Approximation and Smoothing

% Extract scale and rotation part sub-matrix.
H = tform.T;
R = H(1:2,1:2);
% Compute theta from mean of two possible arctangents
theta = mean([atan2(R(2),R(1)) atan2(-R(3),R(4))]);
% Compute scale from mean of two stable mean calculations
scale = mean(R([1 4])/cos(theta));
% Translation remains the same:
translation = H(3, 1:2);
% Reconstitute new s-R-t transform:
HsRt = [[scale*[cos(theta) -sin(theta); sin(theta) cos(theta)];...
  translation], [0 0 1]'];
tformsRT = affine2d(HsRt);

imgBold = imwarp(imgB, tform, 'OutputView', imref2d(size(imgB)));
imgBsRt = imwarp(imgB, tformsRT, 'OutputView', imref2d(size(imgB)));


%% Write the Video
writeVideo(myVideo,imfuse(imgBold,imgBsRt,'ColorChannels','red-cyan'));


end

close(myVideo);

The helper function:

function [ surf_points ] = surf_function_CAN(img)


surfpoints_raw= detectSURFFeatures(img);
[featuresOriginal,  validPtsOriginal]  = extractFeatures(img,  surfpoints_raw);
strongestPoints = validPtsOriginal.selectStrongest(1600);

array=strongestPoints.Location;

% New - Get X and Y coordinates

X = array(:,1);
Y = array(:,2);

% New - Determine a mask to grab the points we want

ind = (((X>156-9-70 & X<156+9+70) & (Y>406-9-70 & Y<406+9+70)) | ...
((X>684-11-70 & X<684+11+70) & (Y>274-11-70 & Y<274+11+70)) | ...
((X>1066-15-70 & X<1066+15+70) & (Y>67-15-70 & Y<67+15+70)) | ...
((X>1559-15-70 & X<1559+15+70) & (Y>867-15-70 & Y<867+15+70)) | ...
((X>1082-18-70 & X<1082+18+70) & (Y>740-18-100 & Y<740+18+100)))  ;

% New - Create new SURFPoints structure that contains all information
% from the points we need

array_filtered =strongestPoints(ind);
surf_points= array_filtered;

end

1 Answer:

Answer 0 (score: 0):

First of all, if you look at their example, you should use the part where they run the processing loop, not the part that shows how to do it between 2 frames, because the two are not fully compatible. Other than that, the only thing you need to do is perform the analysis on the grayscale images but apply the transformation to the color images:

%% Load Video and Open Save File
filename = 'shaky_car.avi';
hVideoSrc = vision.VideoFileReader(filename);

myVideo = VideoWriter('vivi.avi');
open(myVideo);

% Get next Image
colorImg = step(hVideoSrc);
% Try to Convert to Grayscale
try
    imgB = rgb2gray(colorImg);
    RGB = true;
catch % Image is not RGB
    imgB = colorImg;
    RGB = false;
end

Hcumulative = eye(3);
ptThresh = 0.1;
% Loop Through Video
while ~isDone(hVideoSrc)
    imgA = imgB;
    % Get Next Image
    colorImg = step(hVideoSrc);
    % Convert to Grayscale
    if RGB
        imgB = rgb2gray(colorImg);
    else
        imgB = colorImg;
    end

    %% Calculate Transformation
    % Generate Prospective Points
    pointsA = detectFASTFeatures(imgA, 'MinContrast', ptThresh);
    pointsB = detectFASTFeatures(imgB, 'MinContrast', ptThresh);

    % Extract Features for the Corners
    [featuresA, pointsA] = extractFeatures(imgA, pointsA);
    [featuresB, pointsB] = extractFeatures(imgB, pointsB);

    indexPairs = matchFeatures(featuresA, featuresB);
    pointsA = pointsA(indexPairs(:, 1), :);
    pointsB = pointsB(indexPairs(:, 2), :);

    [tform] = estimateGeometricTransform(pointsB, pointsA, 'affine');

    % Extract Rotation & Translations
    H = tform.T;
    R = H(1:2,1:2);

    theta = mean([atan2(R(2),R(1)) atan2(-R(3),R(4))]);

    scale = mean(R([1 4])/cos(theta));

    translation = H(3, 1:2);

    % Reconstitute Transform
    HsRt = [[scale*[cos(theta) -sin(theta); sin(theta) cos(theta)]; ...
        translation], [0 0 1]'];
    Hcumulative = HsRt*Hcumulative;

    % Perform Transformation on Color Image
    img = imwarp(colorImg, affine2d(Hcumulative),'OutputView',imref2d(size(imgB)));

    % Save Transformed Color Image to Video File
    writeVideo(myVideo,img)
end
close(myVideo)
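
As a quick sanity check, assuming the stabilized file 'vivi.avi' was written by the loop above, you can reopen it and confirm that each frame still carries three color channels:

% Reopen the stabilized output and verify it kept its RGB channels
v = VideoReader('vivi.avi');   % file name taken from the code above
frame = readFrame(v);          % read the first stabilized frame
fprintf('Frame size: %dx%dx%d\n', size(frame,1), size(frame,2), size(frame,3));
assert(size(frame,3) == 3, 'Output video lost its color channels');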