Background scenes often evolve over time: lighting conditions may change (for example, from sunrise to sunset), and new objects can be added to or removed from the background. It is therefore necessary to build the background model dynamically. Based on this, I wrote a simple frame-differencing program. It works well, but it is slow. How can I make it faster? Any suggestions?
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/background_segm.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/video/tracking.hpp>

using namespace cv;
using namespace std;
int main()
{
    cv::Mat gray;        // current gray-level image
    cv::Mat background;  // accumulated background
    cv::Mat backImage;   // background image
    cv::Mat foreground;  // foreground image
    // learning rate in background accumulation
    double learningRate;
    int threshold;       // threshold for foreground extraction
    cv::VideoCapture capture("video.mp4");
    // check if video successfully opened
    if (!capture.isOpened())
        return 0;
    // current video frame
    cv::Mat frame;
    double rate = capture.get(CV_CAP_PROP_FPS);
    int delay = 1000/rate;
    // foreground binary image
    //cv::Mat foreground;
    cv::Mat output;
    bool stop(false);
    while (!stop) {
        if (!capture.read(frame))
            break;
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        cv::namedWindow("back");
        cv::imshow("back", gray);
        // initialize background to 1st frame
        if (background.empty())
            gray.convertTo(background, CV_32F);
        // convert background to 8U
        background.convertTo(backImage, CV_8U);
        // compute difference between image and background
        cv::absdiff(backImage, gray, foreground);
        // apply threshold to foreground image
        cv::threshold(foreground, output, 10, 255, cv::THRESH_BINARY_INV);
        // accumulate background
        cv::accumulateWeighted(gray, background, 0.01, output);
        cv::namedWindow("out");
        cv::imshow("out", output);
        if (cv::waitKey(delay) >= 0)
            stop = true;
    }
}
Answer 0 (score: 1)
I modified and corrected some parts of the code:
cv::namedWindow("back") and cv::namedWindow("out") were inside the while loop; they only need to be called once, so move them before the loop.
if (background.empty()) checks whether the matrix is empty. It is only needed on the first iteration, while background is still empty; on every later iteration the matrix is already filled. If you instead initialize background = cv::Mat::zeros(rows, cols, CV_32F) before the loop, with the type and size the loop expects, the code works on the first cycle without that check, and the accumulation is not affected. Here is the updated code:
int main()
{
    cv::Mat gray;        // current gray-level image
    cv::Mat background;  // accumulated background
    cv::Mat backImage;   // background image
    cv::Mat foreground;  // foreground image
    // learning rate in background accumulation
    double learningRate;
    int threshold;       // threshold for foreground extraction
    cv::VideoCapture capture("C:/Users/Pedram91/Pictures/Camera Roll/videoplayback.mp4");
    // check if video successfully opened
    if (!capture.isOpened())
        return 0;
    // current video frame
    cv::Mat frame;
    double rate = capture.get(CV_CAP_PROP_FPS);
    int delay = 1000/rate;
    // foreground binary image
    //cv::Mat foreground;
    cv::Mat output;
    bool stop(false);
    cv::namedWindow("back"); // this should go here, it only needs to be called once
    cv::namedWindow("out");  // this should go here, it only needs to be called once
    int rows = capture.get(CV_CAP_PROP_FRAME_HEIGHT);
    int cols = capture.get(CV_CAP_PROP_FRAME_WIDTH);
    background = cv::Mat::zeros(rows, cols, CV_32F); // this replaces the "if (background.empty())" check in the while loop
    while (!stop) {
        if (!capture.read(frame))
            break;
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        cv::imshow("back", gray);
        // background is already initialized before the loop,
        // so the per-frame initialization is no longer needed:
        // if (background.empty())
        //     gray.convertTo(background, CV_32F);
        // convert background to 8U
        background.convertTo(backImage, CV_8U);
        // compute difference between image and background
        cv::absdiff(backImage, gray, foreground);
        // apply threshold to foreground image
        cv::threshold(foreground, output, 10, 255, cv::THRESH_BINARY_INV);
        // accumulate background
        cv::accumulateWeighted(gray, background, 0.01, output);
        cv::imshow("out", output);
        if (cv::waitKey(delay) >= 0)
            stop = true;
    }
}
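For a further speedup, it may also be worth comparing against OpenCV's built-in background subtractors, which your includes already pull in via background_segm.hpp. Below is a minimal sketch, assuming OpenCV 3.x (cv::createBackgroundSubtractorMOG2 and the cv::CAP_PROP_* constants); the subtractor parameters (history, varThreshold, detectShadows) and the file name "video.mp4" are illustrative placeholders, not tuned values.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/video/background_segm.hpp>

int main()
{
    cv::VideoCapture capture("video.mp4"); // illustrative path
    if (!capture.isOpened())
        return 0;

    int delay = 1000 / capture.get(cv::CAP_PROP_FPS);

    // MOG2 maintains its own adaptive per-pixel background model,
    // so no manual absdiff/accumulateWeighted is needed
    cv::Ptr<cv::BackgroundSubtractorMOG2> mog2 =
        cv::createBackgroundSubtractorMOG2(500, 16.0, false); // history, varThreshold, detectShadows

    cv::namedWindow("frame");
    cv::namedWindow("foreground");

    cv::Mat frame, fgMask;
    while (capture.read(frame))
    {
        // apply() updates the model and writes the foreground mask;
        // a negative learning rate lets the algorithm choose one automatically
        mog2->apply(frame, fgMask, -1);

        cv::imshow("frame", frame);
        cv::imshow("foreground", fgMask);
        if (cv::waitKey(delay) >= 0)
            break;
    }
    return 0;
}

The mixture-of-Gaussians model also adapts to gradual illumination changes, which fits the evolving-background scenario described in the question.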