I took the sample code from the OpenCV sources and tried to get it working on iOS. I did the following:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Get a cv::Mat from the CMSampleBufferRef
    UIImage *img = [self imageFromSampleBuffer:sampleBuffer];
    cv::Mat cvImg = [img CVGrayscaleMat];

    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    std::vector<cv::Rect> found;
    // Arguments: image, results, hitThreshold, winStride, padding, scale, finalThreshold
    hog.detectMultiScale(cvImg, found, 0.2, cv::Size(8,8), cv::Size(16,16), 1.05, 2);

    for (int i = 0; i < (int)found.size(); i++) {
        cv::Rect r = found[i];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.label.text = [NSString stringWithFormat:@"Found at %d, %d, %d, %d",
                               r.x, r.y, r.width, r.height];
        });
        NSLog(@"Found at %d, %d, %d, %d", r.x, r.y, r.width, r.height);
    }
}
where CVGrayscaleMat is:
-(cv::Mat)CVGrayscaleMat
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;
    cv::Mat cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,    // Pointer to backing data
                                                    cols,          // Width of bitmap
                                                    rows,          // Height of bitmap
                                                    8,             // Bits per component
                                                    cvMat.step[0], // Bytes per row
                                                    colorSpace,    // Colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return cvMat;
}
and imageFromSampleBuffer is the sample from Apple's documentation. The thing is, the app fails to detect people. I have tried different sizes and poses, and nothing works for me. What am I missing?
Answer 0 (score: 3)
I managed to get it working. It turns out that a CV_8UC1 matrix is not the right input; although OpenCV does not document this, the problem shows up when it is passed to the detectMultiScale method. When I converted CV_8UC4 to CV_8UC3 with

-(cv::Mat)CVMat3Channels
{
    cv::Mat rgbaMat = [self CVMat];
    cv::Mat rgbMat(self.size.height, self.size.width, CV_8UC3); // 8 bits per component, 3 channels
    cv::cvtColor(rgbaMat, rgbMat, CV_RGBA2RGB, 3);
    return rgbMat;
}
the detection started working.