Inaccurate tracking when drawing the output feature vector of calcOpticalFlow

Asked: 2013-07-03 08:04:54

Tags: opencv opticalflow feature-tracking

I have been trying to develop a simple feature-tracking program. The user outlines a region on the screen with the mouse, which creates a mask for that region that is passed to goodFeaturesToTrack. The features found by the function are then drawn on the screen (represented by blue circles).

Next, I pass the feature vector returned by that function to calcOpticalFlowPyrLK and draw the resulting vector of points on the screen (represented by green circles). Although the program tracks the direction of flow correctly, for some reason the features output by calcOpticalFlowPyrLK do not line up with the object's position on the screen.

I feel like this is a small mistake in the logic I'm using, but I can't seem to pin it down, and I would really appreciate some help from you all.

I've posted my code below, and I'd like to apologize for the global variables and messy structure. I'm just testing right now, and plan to clean it up and convert it to an OOP format as soon as I have it working.

There is also a link to a YouTube video I uploaded that shows the behavior I'm fighting with.

bool drawingBox = false;
bool destroyBox = false;
bool targetAcquired = false;
bool featuresFound = false;
CvRect box;
int boxCounter = 0;
cv::Point objectLocation;
cv::Mat prevFrame, nextFrame, prevFrame_1C, nextFrame_1C;
std::vector<cv::Point2f> originalFeatures, newFeatures, baseFeatures;
std::vector<uchar> opticalFlowFeatures;
std::vector<float> opticalFlowFeaturesError;
cv::TermCriteria opticalFlowTermination = cv::TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3);
cv::Mat mask;
cv::Mat clearMask;

long currentFrame = 0;

void draw(cv::Mat image, CvRect rectangle)
{

    if (drawingBox)
    {
        cv::rectangle(image, cv::Point(box.x, box.y), cv::Point(box.x + box.width, box.y + box.height), cv::Scalar(225, 238 , 81), 2);
        CvRect rectangle2 = cvRect(box.x, box.y, box.width, box.height);
    }

    if (featuresFound)
    {   
        for (int i = 0; i < originalFeatures.size(); i++)
        {
            cv::circle(image, baseFeatures[i], 4, cv::Scalar(255, 0, 0), 1, 8, 0);
            cv::circle(image, newFeatures[i], 4, cv::Scalar(0, 255, 0),1, 8, 0);
            cv::line(image, baseFeatures[i], newFeatures[i], cv::Scalar(255, 0, 0), 2, CV_AA);
        }
    }
}

void findFeatures(cv::Mat mask)
{
    if (!featuresFound && targetAcquired)
    {
        cv::goodFeaturesToTrack(prevFrame_1C, baseFeatures, 200, 0.1, 0.1, mask);
        originalFeatures= baseFeatures;
        featuresFound = true;
        std::cout << "Number of Corners Detected: " << originalFeatures.size() << std::endl;

        for(int i = 0; i < originalFeatures.size(); i++)
        {
            std::cout << "Corner Location " << i << ": " << originalFeatures[i].x << "," << originalFeatures[i].y << std::endl;
        }
    }
}


void trackFeatures()
{
    cv::calcOpticalFlowPyrLK(prevFrame_1C, nextFrame_1C, originalFeatures, newFeatures, opticalFlowFeatures, opticalFlowFeaturesError, cv::Size(30,30), 5, opticalFlowTermination);
    originalFeatures = newFeatures;

}

void mouseCallback(int event, int x, int y, int flags, void *param)
{
    cv::Mat frame;
    frame = *((cv::Mat*)param);

    switch(event)
    {
    case CV_EVENT_MOUSEMOVE:
        {
            if(drawingBox)
            {
                box.width = x-box.x;
                box.height = y-box.y;
            }
        }
        break;

    case CV_EVENT_LBUTTONDOWN:
        {
            drawingBox = true;
            box = cvRect (x, y, 0, 0);
            targetAcquired = false;
            cv::destroyWindow("Selection");
        }
        break;

    case CV_EVENT_LBUTTONUP:
        {
            drawingBox = false;
            featuresFound = false;
            boxCounter++;
            std::cout << "Box " << boxCounter << std::endl;
            std::cout << "Box Coordinates: " << box.x << "," << box.y << std::endl;
            std::cout << "Box Height: " << box.height << std::endl;
            std::cout << "Box Width: " << box.width << std:: endl << std::endl;

            if(box.width < 0)
            {
                box.x += box.width;
                box.width *= -1;
            }

            if(box.height < 0)
            {
                box.y +=box.height;
                box.height *= -1;
            }

            objectLocation.x = box.x;
            objectLocation.y = box.y;
            targetAcquired = true;

        }
        break;

    case CV_EVENT_RBUTTONUP:
        {
            destroyBox = true;
        }
        break;
    }
}

int main ()
{
    const char *name = "Boundary Box";
    cv::namedWindow(name);

    cv::VideoCapture camera;
    cv::Mat cameraFrame;
    int cameraNumber = 0;
    camera.open(cameraNumber);

    camera >> cameraFrame;

    cv::Mat mask = cv::Mat::zeros(cameraFrame.size(), CV_8UC1);
    cv::Mat clearMask = cv::Mat::zeros(cameraFrame.size(), CV_8UC1);

    if (!camera.isOpened())
    {
        std::cerr << "ERROR: Could not access the camera or video!" << std::endl;
    }

    cv::setMouseCallback(name, mouseCallback,  &cameraFrame);

    while(true)
    {

        if (destroyBox)
        {
            cv::destroyAllWindows();
            break;
        }

        camera >> cameraFrame;

        if (cameraFrame.empty())
        {
            std::cerr << "ERROR: Could not grab a camera frame." << std::endl;
            exit(1);
        }

        camera.set(CV_CAP_PROP_POS_FRAMES, currentFrame);
        camera >> prevFrame;
        cv::cvtColor(prevFrame, prevFrame_1C, cv::COLOR_BGR2GRAY);

        camera.set(CV_CAP_PROP_POS_FRAMES, currentFrame ++);
        camera >> nextFrame;
        cv::cvtColor(nextFrame, nextFrame_1C, cv::COLOR_BGR2GRAY);

        if (targetAcquired)
        {
            cv::Mat roi (mask, cv::Rect(box.x, box.y, box.width, box.height));
            roi = cv::Scalar(255, 255, 255);
            findFeatures(mask);
            clearMask.copyTo(mask);
            trackFeatures();
        }

        draw(cameraFrame, box);
        cv::imshow(name, cameraFrame);
        cv::waitKey(20);
    }

    cv::destroyWindow(name);
    return 0;
}

1 Answer:

Answer 0 (score: 1):

It looks to me like you cannot use camera.set(CV_CAP_PROP_POS_FRAMES, currentFrame) on a webcam, but I'm not positive about that.

Instead, I suggest you save the previous frame in your prevFrame variable.

As an example, I can suggest this working code; I only changed things inside the while loop, and added a comment before every line I added:

while(true)
{

    if (destroyBox)
    {
        cv::destroyAllWindows();
        break;
    }

    camera >> cameraFrame;

    if (cameraFrame.empty())
    {
        std::cerr << "ERROR: Could not grab a camera frame." << std::endl;
        exit(1);
    }

    // new lines
    if(prevFrame.empty()){
            // clone(): cameraFrame's buffer is reused by the next grab,
            // so a plain assignment would alias the new frame
            prevFrame = cameraFrame.clone();
            continue;
    }
    // end new lines

    //camera.set(CV_CAP_PROP_POS_FRAMES, currentFrame);
    //camera >> prevFrame;
    cv::cvtColor(prevFrame, prevFrame_1C, cv::COLOR_BGR2GRAY);

    //camera.set(CV_CAP_PROP_POS_FRAMES, currentFrame ++);
    //camera >> nextFrame;
    // new line
    nextFrame = cameraFrame;
    cv::cvtColor(nextFrame, nextFrame_1C, cv::COLOR_BGR2GRAY);

    if (targetAcquired)
    {
        cv::Mat roi (mask, cv::Rect(box.x, box.y, box.width, box.height));
        roi = cv::Scalar(255, 255, 255);
        findFeatures(mask);
        clearMask.copyTo(mask);
        trackFeatures();
    }

    draw(cameraFrame, box);
    cv::imshow(name, cameraFrame);
    cv::waitKey(20);

    // old = new
    // new line
    prevFrame = cameraFrame.clone();

}