Below is some source code I downloaded from somewhere; it can detect red objects and display their center coordinates.
a = imaqhwinfo;
[camera_name, camera_id, format] = getCameraInfo(a);
% Capture the video frames using the videoinput function
% Replace the resolution and adaptor name with those of your installed camera.
vid = videoinput(camera_name, camera_id, format);
% Set the properties of the video object
set(vid, 'FramesPerTrigger', Inf);
set(vid, 'ReturnedColorspace', 'rgb')
vid.FrameGrabInterval = 1;
% Start the video acquisition here
start(vid)
% Set a loop that stops after 100 frames of acquisition
while(vid.FramesAcquired<=100)
% Get the snapshot of the current frame
data = getsnapshot(vid);
% Now to track red objects in real time
% we have to subtract the red component
% from the grayscale image to extract the red components in the image.
diff_im = imsubtract(data(:,:,1), rgb2gray(data));
%Use a median filter to filter out noise
diff_im = medfilt2(diff_im, [3 3]);
% Convert the resulting grayscale image into a binary image.
diff_im = im2bw(diff_im,0.17);
% Remove all connected regions containing fewer than 300 pixels
diff_im = bwareaopen(diff_im,300);
% Label all the connected components in the image.
bw = bwlabel(diff_im, 8);
% Here we do the image blob analysis.
% We get a set of properties for each labeled region.
stats = regionprops(bw, 'BoundingBox', 'Centroid');
% Display the image
imshow(data)
hold on
%This is a loop to bound the red objects in a rectangular box.
for object = 1:length(stats)
bb = stats(object).BoundingBox;
bc = stats(object).Centroid;
rectangle('Position',bb,'EdgeColor','r','LineWidth',2)
plot(bc(1),bc(2), '-m+')
a = text(bc(1)+15, bc(2), sprintf('X: %d  Y: %d', round(bc(1)), round(bc(2)))); % sprintf avoids strcat trimming the trailing spaces from the labels
%disp(' X-Coordinate Y-cordinate')
%x=gallery('uniformdata',[5 3],0);
%disp(x)
set(a, 'FontName', 'Arial', 'FontWeight', 'bold', 'FontSize', 12, 'Color', 'yellow');
end
hold off
end
% Both the loops end here.
% Stop the video acquisition.
stop(vid);
% Flush all the image data stored in the memory buffer.
flushdata(vid);
% Clear all variables
% clear all
sprintf('%s','That was all about Image tracking, Guess that was pretty easy :) ')
The problem is that I want to detect the pupil of an eye, so I need to detect black in the image, but I don't know how to modify the code so that it detects black instead. Any ideas on this? Please help me, thank you all.
Answer 0 (score: 4)
diff_im = imsubtract(data(:,:,1), rgb2gray(data));
is where the algorithm extracts the red component of the color data. That is the line you have to change.
Instead of extracting the red component (as pointed out in the code comments), you could simply stick with the grayscale image:
diff_im = rgb2gray(data);
But I think that would end up finding white objects instead. To fix that, you can either change the blob analysis or simply invert the input. I believe it would look like this:
diff_im = imcomplement(rgb2gray(data));
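For context, this is how that single change would sit inside the original acquisition loop. It is only a sketch; the 0.17 threshold is carried over from the red-detection code and will likely need tuning for dark pupils:
% Inside the while loop, in place of the red-extraction line:
data = getsnapshot(vid);
diff_im = imcomplement(rgb2gray(data)); % invert so dark pixels become bright
diff_im = medfilt2(diff_im, [3 3]);     % the rest of the pipeline stays the same
diff_im = im2bw(diff_im, 0.17);         % threshold carried over; may need tuning
diff_im = bwareaopen(diff_im, 300);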
I cannot test this here because I do not have access to the Image Processing Toolbox. Could you try it for yourself?
The picture I used for testing is here.
% Get the snapshot of the current frame
data = imread('child-eye1-560x372.jpg');
% For dark objects, keep the grayscale image instead of extracting the red component.
diff_im = rgb2gray(data);
imwrite(diff_im,'diff_im.jpg');
%Use a median filter to filter out noise
diff_im = medfilt2(diff_im, [3 3]);
imwrite(diff_im,'diff_im_filt1.jpg');
% Convert the resulting grayscale image into a binary image.
diff_im = im2bw(diff_im,0.17);
imwrite(diff_im,'diff_im_filt2.jpg');
These are just the filtering steps; the blob analysis functions are not available in Octave. The resulting images are:
If I lower the im2bw threshold to 0.07, the result is even better:
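In code, that is just a change to the threshold argument:
diff_im = im2bw(diff_im, 0.07); % only the darkest pixels stay black in the binary image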
As you can see, this part of the process seems fine. The last image is binary, so the big blob should not be too hard to find. As before, I cannot test this myself...
The problem may not lie in the algorithm but in the data you feed it. If the picture contains many small black spots, the algorithm will find all of them and include them in its result.
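If that is the case, one workaround is to keep only the largest blob that survives the size filter. The sketch below is untested and combines the inverted-grayscale idea with that selection; the 0.93 threshold (the inverted equivalent of 0.07) and the 300-pixel limit are just assumptions:
data = imread('child-eye1-560x372.jpg');
diff_im = imcomplement(rgb2gray(data)); % invert so the dark pupil becomes the bright foreground
diff_im = medfilt2(diff_im, [3 3]);     % remove salt-and-pepper noise
bw = im2bw(diff_im, 0.93);              % keep only pixels that were darker than 0.07 originally
bw = bwareaopen(bw, 300);               % drop blobs smaller than 300 pixels
stats = regionprops(bwlabel(bw, 8), 'Area', 'Centroid');
if ~isempty(stats)
[~, idx] = max([stats.Area]);           % the pupil should be the largest remaining blob
pupil_center = stats(idx).Centroid      % [x y] coordinates of the presumed pupil
end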