I am using the code below.
function videoAnalysis()
    foregroundDetector = vision.ForegroundDetector('NumGaussians', 3, 'NumTrainingFrames', 50);
    [filename, pathname] = uigetfile( ...
        {'*.*', 'All Files (*.*)'}, ...
        'Select a video file');
    videoReader = vision.VideoFileReader(fullfile(pathname, filename));
    % fRate = videoReader.info.VideoFrameRate;
    % disp(fRate);
    for i = 1:150
        frame = step(videoReader);
        foreground = step(foregroundDetector, frame);
    end
    se = strel('square', 3);
    filteredForeground = imopen(foreground, se);
    blobAnalysis = vision.BlobAnalysis('BoundingBoxOutputPort', true, ...
        'AreaOutputPort', false, 'CentroidOutputPort', false, ...
        'MinimumBlobArea', 350, 'MaximumBlobArea', 15000, ...
        'MajorAxisLengthOutputPort', true, 'MaximumCount', 5);
    bbox = step(blobAnalysis, filteredForeground);
    result = insertShape(frame, 'Rectangle', bbox, 'Color', 'green');
    numCars = size(bbox, 1);
    result = insertText(result, [10 10], numCars, 'BoxOpacity', 1, 'FontSize', 14);
    videoPlayer = vision.VideoPlayer('Name', 'Detected Cars');
    videoPlayer.Position(3:4) = [650, 400]; % window size: [width, height]
    % se = strel('square', 3); % morphological filter for noise removal
    i = 0;
    while ~isDone(videoReader)
        % if ( mod(i,10) == 0 )
        %     disp(i);
        % end
        % i = i + 1;
        frame = step(videoReader); % read the next video frame

        % Detect the foreground in the current video frame
        foreground = step(foregroundDetector, frame);

        % Use morphological opening to remove noise in the foreground
        filteredForeground = imopen(foreground, se);

        % Detect the connected components with the specified minimum area, and
        % compute their bounding boxes
        bbox = step(blobAnalysis, filteredForeground);

        % Draw bounding boxes around the detected cars
        result = insertShape(frame, 'Rectangle', bbox, 'Color', 'blue');
        % imshow(result);

        % Display the number of cars found in the video frame
        numCars = size(bbox, 1);
        result = insertText(result, [10 10], numCars, 'BoxOpacity', 1, 'FontSize', 14);

        % If no moving object is present, allow passing; otherwise ask to wait
        if numCars == 0
            result = insertText(result, [100 20], 'You can pass now', 'BoxOpacity', 1, 'FontSize', 14, 'BoxColor', 'green');
        else
            result = insertText(result, [100 20], 'Please wait', 'BoxOpacity', 1, 'FontSize', 14, 'BoxColor', 'red');
        end

        step(videoPlayer, result); % display the annotated frame
    end
    release(videoReader); % close the video file
end
Basically, the code loads a video, detects moving objects between video frames, and draws bounding boxes around the changed pixels. I need the centroid and area information for these bounding boxes. To get them, I assume I have to set the AreaOutputPort and CentroidOutputPort parameters of blobAnalysis to true. However, if I do that, MATLAB throws an error: the POSITION matrix must have four columns for the Rectangle shape. How can I obtain these values?

Thanks.
Answer 0 (score: 1)

If you set AreaOutputPort and CentroidOutputPort to true, you get three outputs instead of one. Instead of

bbox = step(blobAnalysis, filteredForeground);

use

[areas, centroids, bbox] = step(blobAnalysis, filteredForeground);

The way you have it now, bbox ends up being a one-column array containing the areas, which is why insertShape throws that error.
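As a minimal sketch of how the extra outputs could be used inside the while loop (not part of the original answer): since your blobAnalysis also has MajorAxisLengthOutputPort set to true, there is a fourth output, and the outputs follow the documented order area, centroid, bounding box, major axis length. insertMarker and insertText are Computer Vision Toolbox functions; the marker style, label placement, and variable names below are only examples.

    % Replace the single-output call with the multi-output form
    [areas, centroids, bbox, majorAxes] = step(blobAnalysis, filteredForeground);

    % Draw the bounding boxes as before
    result = insertShape(frame, 'Rectangle', bbox, 'Color', 'blue');

    if ~isempty(centroids)
        % Mark each blob's centroid and label it with its area in pixels
        result = insertMarker(result, centroids, '+', 'Color', 'yellow');
        labels = cellstr(num2str(double(areas(:))));
        result = insertText(result, centroids, labels, 'FontSize', 10);
    end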
Answer 1 (score: 0)

I found another way to compute the areas and centroids:
for i = 1:size(bbox, 1)
    % Center of the i-th bounding box: [x + width/2, y + height/2]
    centroid(i,:) = [bbox(i,1) + bbox(i,3)/2, bbox(i,2) + bbox(i,4)/2];
    % Area of the i-th bounding box: width * height
    area(i,1) = bbox(i,3) * bbox(i,4);
end