OpenCV: detecting the corners of a hidden pattern in an image

Date: 2018-04-19 10:42:24

Tags: ios objective-c opencv image-processing

I need to build a mobile app that can detect a hidden (standard) pattern in an image.

The goal is to detect its corners and extract some information (such as a link) from the image.

For now I am focusing on iOS, but I don't know how to embed the pattern or how to recognize it with OpenCV.

So the first question is: how can I add hidden information to a picture?

I found this library, which implements steganography to hide information inside a picture. Is this the right approach?
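Least-significant-bit (LSB) embedding is the usual starting point for this kind of steganography, and it survives lossless formats like PNG (lossy JPEG compression will destroy it). Below is a minimal, dependency-free sketch of the idea; the function names `embedLSB`/`extractLSB` are illustrative, not from any particular library:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Embed each bit of `message` into the least-significant bit of
// successive bytes in `pixels` (e.g. the channel bytes of an RGBA image).
// The caller must ensure pixels.size() >= 8 * message.size().
void embedLSB(std::vector<uint8_t> &pixels, const std::string &message) {
    size_t bitIndex = 0;
    for (char c : message) {
        for (int b = 7; b >= 0; --b, ++bitIndex) {
            uint8_t bit = (static_cast<uint8_t>(c) >> b) & 1;
            pixels[bitIndex] = (pixels[bitIndex] & 0xFE) | bit;
        }
    }
}

// Recover `length` characters by reading the LSBs back in the same order.
std::string extractLSB(const std::vector<uint8_t> &pixels, size_t length) {
    std::string message;
    size_t bitIndex = 0;
    for (size_t i = 0; i < length; ++i) {
        uint8_t c = 0;
        for (int b = 7; b >= 0; --b, ++bitIndex) {
            c |= (pixels[bitIndex] & 1) << b;
        }
        message.push_back(static_cast<char>(c));
    }
    return message;
}
```

Changing only the lowest bit of each channel byte shifts a pixel value by at most 1, which is invisible to the eye but readable by software that knows the scheme.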

The next step is to detect the image and its corners with the phone's camera. My idea is to create a standard pattern (such as dots or lines) to overlay on the .png image, and then use template matching to detect the region containing the pattern during capture. However, from what I have read online, this technique is not the best fit for this problem.
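For reference, template matching itself is conceptually simple: slide the template over the image and score each offset, keeping the best one. OpenCV's `cv::matchTemplate` does this efficiently; the toy sketch below shows the same idea with a plain sum-of-absolute-differences (SAD) score on raw grayscale buffers (all names here are illustrative):

```cpp
#include <cstdint>
#include <cstdlib>
#include <utility>
#include <vector>

// Naive template matching: slide `tmpl` over `img` and return the
// (x, y) offset with the smallest sum of absolute differences (SAD).
// Conceptually similar to cv::matchTemplate with TM_SQDIFF, but on
// plain row-major grayscale buffers.
std::pair<int, int> matchTemplateSAD(const std::vector<uint8_t> &img, int imgW, int imgH,
                                     const std::vector<uint8_t> &tmpl, int tw, int th) {
    long bestScore = -1;
    std::pair<int, int> best(0, 0);
    for (int y = 0; y + th <= imgH; ++y) {
        for (int x = 0; x + tw <= imgW; ++x) {
            long score = 0;
            for (int ty = 0; ty < th; ++ty)
                for (int tx = 0; tx < tw; ++tx)
                    score += std::abs(int(img[(y + ty) * imgW + (x + tx)]) -
                                      int(tmpl[ty * tw + tx]));
            if (bestScore < 0 || score < bestScore) {
                bestScore = score;
                best = {x, y};
            }
        }
    }
    return best;
}
```

The weakness you read about is visible in this sketch: the comparison is pixel-for-pixel, so plain template matching is not invariant to scale, rotation, or perspective, which is exactly what a handheld camera introduces.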

I have already implemented the HSV conversion for color tracking by following this tutorial, but I don't know how to proceed from there.

So the second question is: how can I recognize a standard pattern and detect its corners in the frames captured by the camera?

This is the code I use to convert a sample buffer to a UIImage:

// Clamp a signed intermediate value to the 0–255 range of a uint8 channel.
#define clamp(v) ((v) < 0 ? 0 : ((v) > 255 ? 255 : (v)))

- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    uint8_t *yBuffer = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t yPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
    uint8_t *cbCrBuffer = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
    size_t cbCrPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);

    int bytesPerPixel = 4;
    uint8_t *rgbBuffer = (uint8_t*)malloc(width * height * bytesPerPixel);

    for(int y = 0; y < height; y++) {
        uint8_t *rgbBufferLine = &rgbBuffer[y * width * bytesPerPixel];
        uint8_t *yBufferLine = &yBuffer[y * yPitch];
        uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];

        for(int x = 0; x < width; x++) {
            // NV12 layout: full-resolution luma plane, 2x2-subsampled interleaved CbCr.
            // (Renamed from `y` to `luma` to avoid shadowing the row counter.)
            int16_t luma = yBufferLine[x];
            int16_t cb = cbCrBufferLine[x & ~1] - 128;
            int16_t cr = cbCrBufferLine[x | 1] - 128;

            uint8_t *rgbOutput = &rgbBufferLine[x*bytesPerPixel];

            // BT.601-style YCbCr -> RGB conversion.
            int16_t r = (int16_t)roundf( luma + cr *  1.4 );
            int16_t g = (int16_t)roundf( luma + cb * -0.343 + cr * -0.711 );
            int16_t b = (int16_t)roundf( luma + cb *  1.765 );

            rgbOutput[0] = 0xff;
            rgbOutput[1] = clamp(b);
            rgbOutput[2] = clamp(g);
            rgbOutput[3] = clamp(r);
        }
    }

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbBuffer, width, height, 8, width * bytesPerPixel, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(quartzImage);
    free(rgbBuffer);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}

And this applies the HSV conversion:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
  fromConnection:(AVCaptureConnection *)connection {
  @autoreleasepool {
    if (self.isProcessingFrame) {
        return;
    }
    self.isProcessingFrame = YES;

    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    cv::Mat matFrame = [self cvMatFromUIImage:image];

    // The Mat from cvMatFromUIImage is 4-channel (BGRA); CV_BGR2HSV expects
    // a 3-channel input, so drop the alpha channel first.
    cv::cvtColor(matFrame, matFrame, CV_BGRA2BGR);
    cv::cvtColor(matFrame, matFrame, CV_BGR2HSV);
    // Keep only red-ish hues (H in [0, 10]); the result is a binary mask.
    cv::inRange(matFrame, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), matFrame);

    image = [self UIImageFromCVMat:matFrame];

    // Convert to base64
    NSData *imageData = UIImagePNGRepresentation(image);
    NSString *encodedString = [imageData base64EncodedStringWithOptions:NSDataBase64Encoding64CharacterLineLength];

    self.isProcessingFrame = NO;
  }
}
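One possible next step, assuming the pattern shows up as a single roughly quadrilateral blob in the `inRange` mask: estimate its four corners with the classic extremes trick (the top-left corner minimizes x + y, the bottom-right maximizes it, the top-right maximizes x - y, the bottom-left minimizes it). In OpenCV you would typically use `cv::findContours` plus `cv::approxPolyDP` instead; the sketch below is a dependency-free illustration of the same idea, with illustrative names:

```cpp
#include <array>
#include <cstdint>
#include <utility>
#include <vector>

using Point = std::pair<int, int>; // (x, y)

// Estimate the four corners of the single bright blob in a binary mask
// (such as the output of cv::inRange). The top-left corner minimises
// x + y, the bottom-right maximises it, the top-right maximises x - y,
// and the bottom-left minimises x - y. Returns {TL, TR, BR, BL}.
std::array<Point, 4> cornersFromMask(const std::vector<uint8_t> &mask, int w, int h) {
    std::array<Point, 4> corners{};
    bool first = true;
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (!mask[y * w + x]) continue;
            if (first) {
                corners.fill(Point(x, y));
                first = false;
                continue;
            }
            if (x + y < corners[0].first + corners[0].second) corners[0] = {x, y}; // TL
            if (x - y > corners[1].first - corners[1].second) corners[1] = {x, y}; // TR
            if (x + y > corners[2].first + corners[2].second) corners[2] = {x, y}; // BR
            if (x - y < corners[3].first - corners[3].second) corners[3] = {x, y}; // BL
        }
    }
    return corners;
}
```

With the four corners in hand, `cv::getPerspectiveTransform` and `cv::warpPerspective` can rectify the region before decoding whatever is hidden in it. Note this trick assumes exactly one blob; noise in the mask would first need to be removed (e.g. with morphological opening).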

I would really appreciate some help, thanks!

0 Answers

No answers yet.