I'm trying to use CIDetector to get the text regions in an image.
- (NSArray *)detectWithImage:(UIImage *)img
{
    // prepare CIImage
    CIImage *image = [CIImage imageWithCGImage:img.CGImage];

    // flip vertically (Core Image uses a bottom-left origin)
    CIFilter *filter = [CIFilter filterWithName:@"CIAffineTransform"];
    [filter setValue:image forKey:kCIInputImageKey];
    CGAffineTransform t = CGAffineTransformMakeTranslation(0, CGRectGetHeight(image.extent));
    t = CGAffineTransformScale(t, 1.0, -1.0);
    [filter setValue:[NSValue valueWithCGAffineTransform:t] forKey:kCIInputTransformKey];
    image = filter.outputImage;

    // prepare CIDetector
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeText
                                              context:nil
                                              options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];

    // retrieve array of CITextFeature
    NSArray *features = [detector featuresInImage:image
                                          options:@{CIDetectorReturnSubFeatures: @YES}];
    return features;
}
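As an alternative to flipping the image manually, the detector can be told the image's orientation via the CIDetectorImageOrientation option, an EXIF-style NSNumber from 1 through 8; a minimal sketch, assuming an already-upright image (orientation 1):

NSArray *features = [detector featuresInImage:image
                                      options:@{CIDetectorImageOrientation: @1,
                                                CIDetectorReturnSubFeatures: @YES}];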
This is the image I'm passing in:
I get nothing back from this picture. I've also tried with a color image, and without flipping the image.
Can anyone point me in the right direction?
Thanks!
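One way to narrow down where this pipeline fails is to log each intermediate value inside detectWithImage:; a minimal sketch, with illustrative messages:

NSLog(@"CGImage is %@", img.CGImage ? @"non-nil" : @"nil");  // nil means the UIImage has no CGImage backing
NSLog(@"extent: %@", NSStringFromCGRect(image.extent));      // a zero extent means the CIImage is empty
NSLog(@"feature count: %lu", (unsigned long)features.count); // 0 means the detector found nothing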
Answer 0 (score: 1)
You should check that the UIImage passed into your function and img.CGImage are not nil, because the rest of the code looks fine; the flip isn't necessary either. For example:
UIImageView *imageView = [[UIImageView alloc] initWithImage:img];
CIImage *image = [CIImage imageWithCGImage:img.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeText
                                          context:nil
                                          options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
// retrieve array of CITextFeature
NSArray *features = [detector featuresInImage:image
                                      options:@{CIDetectorReturnSubFeatures: @YES}];
for (CITextFeature *feature in features) {
    // CIDetector reports bounds with a bottom-left origin; flip y into UIKit's top-left origin
    CGRect b = feature.bounds;
    CGRect frame = CGRectMake(CGRectGetMinX(b),
                              CGRectGetHeight(image.extent) - CGRectGetMinY(b) - CGRectGetHeight(b),
                              CGRectGetWidth(b),
                              CGRectGetHeight(b));
    UIView *view = [[UIView alloc] initWithFrame:frame];
    view.backgroundColor = [[UIColor redColor] colorWithAlphaComponent:0.25];
    [imageView addSubview:view];
}
The red highlighting marks the text regions returned by the CIDetector.
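The answer recommends checking for nil, but the snippet doesn't show the guard itself; a minimal sketch at the top of the original detectWithImage: method (the log message is illustrative):

if (img == nil || img.CGImage == nil) {
    NSLog(@"detectWithImage: got a nil image or an image without a CGImage backing");
    return @[];
}

Note that a UIImage built from a CIImage (e.g. via imageWithCIImage:) has a nil CGImage property, which is one common way to end up with an empty CIImage and no features here.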