Image processing fails when blue appears in the frame. What could be the cause?

Date: 2019-03-24 16:41:15

Tags: c++ opencv image-processing

I am working on a project for detecting green road lanes. The processing pipeline consists of the following steps:

  • Denoise -> BGR2HSV -> HSV filter -> Canny edge detection -> Crop to ROI -> Hough line detection -> Draw lane lines (a minimal sketch of these steps follows below)
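
For reference, here is a minimal sketch of those steps with OpenCV in C++. The helper name detectLaneSegments, the ROI and the Hough parameters are placeholders of mine, not values taken from the project code shown further below; the HSV bounds are the green range quoted later in the question:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Minimal pipeline sketch: denoise -> HSV threshold -> Canny -> ROI crop -> HoughLinesP.
    // The Hough parameters and the ROI are placeholders, not the project's actual values.
    std::vector<cv::Vec4i> detectLaneSegments(const cv::Mat& bgr, const cv::Rect& roi) {
        cv::Mat blurred, hsv, mask, edges;
        cv::GaussianBlur(bgr, blurred, cv::Size(5, 5), 0);     // denoise
        cv::cvtColor(blurred, hsv, cv::COLOR_BGR2HSV);         // BGR -> HSV
        cv::inRange(hsv, cv::Scalar(62, 148, 131),
                    cv::Scalar(90, 255, 206), mask);           // HSV filter for green
        cv::Canny(mask, edges, 133, 400, 5, true);             // Canny edge detection
        cv::Mat cropped = edges(roi);                          // crop to ROI
        std::vector<cv::Vec4i> segments;
        cv::HoughLinesP(cropped, segments, 1, CV_PI / 180,
                        50 /*threshold*/, 20 /*minLineLength*/, 30 /*maxLineGap*/);
        return segments;                                       // drawing is left to the caller
    }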

On the real-time Raspberry Pi camera feed the whole pipeline works as expected most of the time. However, if something blue appears in the camera frame, the capture gradually becomes distorted (see the GIF link) and execution finally stops with a "Floating point exception". So far I do not understand the reason behind it, since it is specific to the colour blue. What I have tried: I disabled the line-processing algorithms so that the pipeline ends at the Hough line detector, and just observed the output. The distortion still occurred, but the "Floating point exception" did not appear. I also ran the processing on Ubuntu 18.04, but on an already recorded video; when I stepped through it frame by frame, blue did not cause any problem.

Could you help me pinpoint the problem? I hope I have made it clear.

GDB output: Received signal SIGFPE, Arithmetic exception. __GI_raise (sig=) at ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
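
A debugging aid that is not part of the original code (a minimal sketch): trapping SIGFPE and aborting immediately makes the core dump / gdb backtrace point at the exact operation that raised it. On Linux, SIGFPE is typically caused by an integer division or modulo by zero; floating-point division by zero just produces inf/NaN by default.

    #include <csignal>
    #include <cstdlib>
    #include <unistd.h>

    // Hypothetical debugging aid: minimal, async-signal-safe SIGFPE handler.
    extern "C" void onSigfpe(int) {
        const char msg[] = "SIGFPE caught (likely integer division or modulo by zero)\n";
        write(STDERR_FILENO, msg, sizeof(msg) - 1);   // write() is async-signal-safe
        std::abort();                                 // abort() leaves a core dump for gdb
    }

    int main() {
        std::signal(SIGFPE, onSigfpe);                // install before the processing loop
        // ... rest of the lane-detection program ...
    }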

P.S. I am using OpenCV 4.0 with C++.

The explanatory GIF

The original image looks like this: [image]

The distorted image after a blue object appears in the frame: [image]

HSV filter parameters for green:

  • H: [62, 90], S: [148, 255], V: [131, 206]
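
As a side check (a hypothetical helper of mine, not present in the code below), logging what fraction of each frame passes these bounds shows numerically whether frames containing a blue object behave differently:

    #include <opencv2/opencv.hpp>

    // Hypothetical diagnostic: fraction of pixels that pass the green HSV bounds above.
    double greenMaskRatio(const cv::Mat& bgrFrame) {
        cv::Mat hsv, mask;
        cv::cvtColor(bgrFrame, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(62, 148, 131), cv::Scalar(90, 255, 206), mask);
        return static_cast<double>(cv::countNonZero(mask)) / mask.total();
    }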

Code snippet:

while (true) {

        timeCapture = (double) cv::getTickCount(); // capture the starting time

        cap >> frame_orig;

        if (frame_counter != 2) {
            frame_counter++;
        }
        else {
            frame_counter = 0;
        // check whether a frame was actually grabbed from the input
        if (frame_orig.empty()) {
            std::cout << "!!! Input video could not be opened" << std::endl;
            return -1;
        }
        avgCounter++; // increment the process counter
        frameHeight = frame_orig.rows;
        frameWidth = frame_orig.cols;

        // denoise the frame using a Gaussian filter
        img_denoise = lanedetector.deNoise(frame_orig);

        // convert from BGR to HSV colorspace
        cv::cvtColor(img_denoise, frame_HSV, cv::COLOR_BGR2HSV);

        // apply color thresholding HSV range for green color
        cv::inRange(frame_HSV, cv::Scalar(low_H, low_S, low_V),
                cv::Scalar(high_H, high_S, high_V), frame_threshed);

        // canny edge detection to the color thresholded image
        // (50,200,3)
        Canny(frame_threshed, frame_cannied, 133, 400, 5, true);

        // copy cannied image
        cv::cvtColor(frame_cannied, frame_houghP, cv::COLOR_GRAY2BGR);

//      std::ofstream myfile;
//      myfile.open("test.txt", std::ios_base::app);

        frame_masked = lanedetector.cropROI(frame_cannied);
        // runs the line detection
        std::vector<cv::Vec4i> line;
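        // (for reference, OpenCV's prototype is HoughLinesP(image, lines, rho, theta,
        //  threshold, minLineLength = 0, maxLineGap = 0))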
        HoughLinesP(frame_masked, lines_houghP, 1, CV_PI / 180, threshold,
                (double) maxLineGap, (double) minLineLength);
        if (!lines_houghP.empty()) {
            // sort the found lines from smallest y to largest y coordinate
            quickSort(lines_houghP, 0, lines_houghP.size());
            // reverse the order largest y to smallest y coordinate
            reverseVector(lines_houghP);

            // Separate lines into left and right lines
            left_right_lines = lanedetector.lineSeparation(lines_houghP,
                    frame_masked);

            // Apply regression to obtain only one line for each side of the lane
            lane = lanedetector.regression(left_right_lines, frame_threshed);

            // Plot lane detection
            flag_plot = lanedetector.plotLane(frame_orig, lane);

        for (size_t i = 0; i < lines_houghP.size(); i++) {
            cv::Vec4i l = lines_houghP[i];
            if (red < 0)
                red = 155;
            if (green < 0)
                green = 55;
            cv::line(frame_houghP, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]),
                    cv::Scalar(255, green, red), 3, cv::LINE_AA);
            red = red - 20;
            green = green - 20;
        }
        }
        //  std::cout << "xTrainData (python)  = " << std::endl << format(frame_houghP, Formatter::FMT_PYTHON) << std::endl << std::endl;

        // calculate the process time
        timeCapture = ((double) cv::getTickCount() - timeCapture)
                / cv::getTickFrequency() * 1000;
        if (avgCounter == fps) {
            std::cout
                    << "The average process time for each 30 frames in milliseconds:     "
                    << (avgRunTime / fps) << std::endl;
            avgCounter = 0;
            avgRunTime = 0;
        } else
            avgRunTime += timeCapture;

        //imshow(window_capture_name, frame_orig);
        imshow(window_lane_detected, frame_houghP);
        imshow(winodw_hsv_filtered, frame_threshed);
        imshow(window_canny_applied, frame_cannied);
        imshow(window_masked, frame_masked);
        imshow(window_vision, frame_orig);

        if (!writer.isOpened()) {
            std::cout << "Could not open the output video file for write\n";
            return -1;
        }
        writer.write(frame_orig);
        red = 250;
        green = 250;

        char key = (char) cv::waitKey(30);
        if (key == 'q' || key == 27) {
            break;
        }

        std::cin.get();
        }
}
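
The lane-detector helpers used above (lineSeparation(), regression(), quickSort(), etc.) are not shown, so the following is only a guess at the kind of code that can raise SIGFPE: a slope computed with integer arithmetic divides by zero on a perfectly vertical Hough segment. A hedged sketch of a guard:

    #include <opencv2/core.hpp>
    #include <limits>

    // Hypothetical guard: dy / dx with integer dx == 0 (a vertical segment) raises SIGFPE.
    double safeSlope(const cv::Vec4i& l) {
        const int dx = l[2] - l[0];
        const int dy = l[3] - l[1];
        if (dx == 0) {
            return std::numeric_limits<double>::infinity();  // handle vertical segments explicitly
        }
        return static_cast<double>(dy) / dx;                 // double division does not raise SIGFPE
    }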

1 Answer:

Answer 0 (score: 1):

To answer my own question: I found that the problem is related to the Raspberry Pi camera. It is not a genuine Pi camera but a clone. When a blue object is in the frame, the pixel values change, as @alterigel pointed out. After running several tests to rule out a software bug, I concluded that the issue lies in the camera hardware itself.
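
For anyone hitting the same symptom: a simple way to confirm that the camera, and not the software, is changing the pixel values is to log per-channel statistics for every frame. The helper below is a hypothetical sketch, not part of the project code:

    #include <opencv2/core.hpp>
    #include <iostream>

    // Hypothetical diagnostic: per-channel mean and standard deviation of a BGR frame.
    // A sudden jump when a blue object enters the frame points at the sensor/driver.
    void logChannelStats(const cv::Mat& bgrFrame) {
        cv::Scalar mean, stddev;
        cv::meanStdDev(bgrFrame, mean, stddev);
        std::cout << "B/G/R mean: " << mean[0] << ' ' << mean[1] << ' ' << mean[2]
                  << "  stddev: " << stddev[0] << ' ' << stddev[1] << ' ' << stddev[2]
                  << std::endl;
    }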