I want to build an OCR application that can recognize text in photos.
I have successfully compiled and integrated the Tesseract engine on iOS, and I get reasonable detection when photographing a clean document (or a photo of this text taken off the screen). For other text, though, such as road signs, shop signs, or text on a colored background, detection fails.
My question is what kind of image preprocessing is needed to get better recognition. For example, I expect the image needs to be converted to grayscale/B&W and the contrast corrected, etc.
How can this be done on iOS? Is there a package for it?
Answer 0 (score: 15)
I'm currently working on the same thing. I found that a PNG saved from Photoshop worked fine, but images that originally came from the camera and were then imported into the app never worked. Don't ask me to explain it, but applying this function made those images work. Maybe it will work for you too.
// this does the trick to have tesseract accept the UIImage.
UIImage * gs_convert_image (UIImage * src_img) {
    CGColorSpaceRef d_colorSpace = CGColorSpaceCreateDeviceRGB();
    /*
     * Note we specify 4 bytes per pixel here even though we ignore the
     * alpha value; you can't specify 3 bytes per-pixel.
     */
    size_t d_bytesPerRow = src_img.size.width * 4;
    unsigned char * imgData = (unsigned char *)malloc(src_img.size.height * d_bytesPerRow);
    CGContextRef context = CGBitmapContextCreate(imgData, src_img.size.width,
                                                 src_img.size.height,
                                                 8, d_bytesPerRow,
                                                 d_colorSpace,
                                                 kCGImageAlphaNoneSkipFirst);

    UIGraphicsPushContext(context);

    // These next two lines 'flip' the drawing so it doesn't appear upside-down.
    CGContextTranslateCTM(context, 0.0, src_img.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);

    // Use UIImage's drawInRect: instead of the CGContextDrawImage function,
    // otherwise you'll have issues when the source image is in portrait orientation.
    [src_img drawInRect:CGRectMake(0.0, 0.0, src_img.size.width, src_img.size.height)];
    UIGraphicsPopContext();

    /*
     * At this point, we have the raw ARGB pixel data in the imgData buffer, so
     * we can perform whatever image processing here.
     */

    // After we've processed the raw data, turn it back into a UIImage instance.
    CGImageRef new_img = CGBitmapContextCreateImage(context);
    UIImage * convertedImage = [[UIImage alloc] initWithCGImage:new_img];

    CGImageRelease(new_img);
    CGContextRelease(context);
    CGColorSpaceRelease(d_colorSpace);
    free(imgData);

    return convertedImage;
}
I have also experimented a lot with preparing images for Tesseract. Resizing, converting to grayscale, and then adjusting brightness and contrast seems to work best.
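As a rough illustration of that brightness/contrast step, a minimal sketch using Core Image's built-in CIColorControls filter might look like the following. The 0.1 brightness and 1.5 contrast values are only starting points to tune, not the exact settings used above, and ocr_adjust_image is just a name for this sketch:

#import <CoreImage/CoreImage.h>

// Sketch only: desaturate and boost brightness/contrast before handing the
// image to Tesseract. The numeric values are placeholders to experiment with.
UIImage * ocr_adjust_image(UIImage *src) {
    CIImage *input = [CIImage imageWithCGImage:src.CGImage];

    CIFilter *controls = [CIFilter filterWithName:@"CIColorControls"];
    [controls setValue:input forKey:kCIInputImageKey];
    [controls setValue:@(0.0) forKey:@"inputSaturation"]; // remove color -> grayscale
    [controls setValue:@(0.1) forKey:@"inputBrightness"]; // slight brightness lift (tune)
    [controls setValue:@(1.5) forKey:@"inputContrast"];   // raise contrast (tune)

    CIImage *output = controls.outputImage;
    CIContext *ctx = [CIContext contextWithOptions:nil];
    CGImageRef cg = [ctx createCGImage:output fromRect:[output extent]];
    UIImage *result = [UIImage imageWithCGImage:cg
                                          scale:src.scale
                                    orientation:src.imageOrientation];
    CGImageRelease(cg);
    return result;
}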
I have also tried the GPUImage library: https://github.com/BradLarson/GPUImage. The GPUImageAverageLuminanceThresholdFilter seemed to give me a nicely adjusted image, but Tesseract didn't seem to work well with it.
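For reference, running an image through that filter once only takes a couple of lines along these lines (assuming the GPUImage headers are already in the project, and that the version in use exposes thresholdMultiplier and imageByFilteringImage:; sourceImage is a placeholder for whatever UIImage you are testing with):

#import "GPUImage.h"

// Sketch: binarize a UIImage with GPUImage's average-luminance threshold.
GPUImageAverageLuminanceThresholdFilter *thresholdFilter =
    [[GPUImageAverageLuminanceThresholdFilter alloc] init];
thresholdFilter.thresholdMultiplier = 1.0; // values slightly above/below 1.0 change how aggressive the cut is
UIImage *binarized = [thresholdFilter imageByFilteringImage:sourceImage];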
I have also pulled OpenCV into my project and plan to try its image routines, possibly even some box detection to find the text regions (which I hope will speed Tesseract up).
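If it helps anyone heading down the same OpenCV road, the box-detection idea could be sketched roughly like this in an Objective-C++ (.mm) file. It assumes the photo has already been converted to a single-channel cv::Mat (for example with the usual UIImage-to-Mat helper from OpenCV's iOS documentation), and all the numeric thresholds are guesses to tune, not tested values:

// Rough sketch (.mm file): find candidate text boxes to crop before Tesseract.
// 'gray' is assumed to already be a single-channel cv::Mat of the photo.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Rect> findTextBoxes(const cv::Mat &gray) {
    cv::Mat bw;
    // Adaptive thresholding copes better with uneven lighting than a global threshold.
    cv::adaptiveThreshold(gray, bw, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                          cv::THRESH_BINARY_INV, 15, 10);

    // Dilate horizontally so the letters of a word merge into one blob.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(9, 3));
    cv::dilate(bw, bw, kernel);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bw, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> boxes;
    for (const auto &c : contours) {
        cv::Rect r = cv::boundingRect(c);
        // Keep wide-ish regions; tiny blobs are usually noise rather than text.
        if (r.width > 40 && r.height > 10)
            boxes.push_back(r);
    }
    return boxes;
}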
Answer 1 (score: 9)
I have used the code above, but added two other function calls to convert the image so that it works with Tesseract.
First, I used an image-resizing routine to scale the image down to 640 x 640, which seems to be more manageable for Tesseract.
-(UIImage *)resizeImage:(UIImage *)image {

    CGImageRef imageRef = [image CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGColorSpaceCreateDeviceRGB();

    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;

    int width, height;

    width = 640;  //[image size].width;
    height = 640; //[image size].height;

    CGContextRef bitmap;

    // Pass 0 for bytesPerRow so Core Graphics computes it for the new 640-pixel
    // size (the source image's bytes-per-row no longer applies after resizing).
    if (image.imageOrientation == UIImageOrientationUp || image.imageOrientation == UIImageOrientationDown) {
        bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, alphaInfo);
    } else {
        bitmap = CGBitmapContextCreate(NULL, height, width, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, alphaInfo);
    }

    if (image.imageOrientation == UIImageOrientationLeft) {
        NSLog(@"image orientation left");
        CGContextRotateCTM(bitmap, radians(90));
        CGContextTranslateCTM(bitmap, 0, -height);
    } else if (image.imageOrientation == UIImageOrientationRight) {
        NSLog(@"image orientation right");
        CGContextRotateCTM(bitmap, radians(-90));
        CGContextTranslateCTM(bitmap, -width, 0);
    } else if (image.imageOrientation == UIImageOrientationUp) {
        NSLog(@"image orientation up");
    } else if (image.imageOrientation == UIImageOrientationDown) {
        NSLog(@"image orientation down");
        CGContextTranslateCTM(bitmap, width, height);
        CGContextRotateCTM(bitmap, radians(-180.));
    }

    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);
    CGImageRelease(ref);

    return result;
}
For the radians calls to work, make sure you declare this above your @implementation:
static inline double radians (double degrees) {return degrees * M_PI/180;}
Then I convert to grayscale.
I found this post, Convert image to grayscale, on converting to grayscale.
I have used the code from there successfully and can now read text in different colors on differently colored backgrounds.
I modified the code slightly to make it a function inside a class rather than its own class, as the original author did.
- (UIImage *) toGrayscale:(UIImage *)img
{
    const int RED = 1;
    const int GREEN = 2;
    const int BLUE = 3;

    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, img.size.width * img.scale, img.size.height * img.scale);

    int width = imageRect.size.width;
    int height = imageRect.size.height;

    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));

    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // paint the bitmap to our context which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [img CGImage]);

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];

            // convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];

            // set the pixels to gray
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }

    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);

    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image
                                                 scale:img.scale
                                           orientation:UIImageOrientationUp];

    // we're done with image now too
    CGImageRelease(image);

    return resultUIImage;
}
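Putting the two answers together, the call order would look something like the sketch below. The method and function names come from the code above; the hand-off to Tesseract at the end is only described in a comment because the exact wrapper API depends on which Tesseract build you compiled:

// Sketch of the overall preprocessing pipeline, assuming this method lives in
// the same class as resizeImage: and toGrayscale: above.
- (UIImage *)prepareForOCR:(UIImage *)photo {
    UIImage *resized = [self resizeImage:photo];  // answer 1: scale down to 640 x 640
    UIImage *gray = [self toGrayscale:resized];   // answer 1: convert to grayscale
    return gs_convert_image(gray);                // answer 0: re-render so Tesseract accepts it
}
// The returned image is what gets handed to the Tesseract wrapper (typically a
// setImage:-style call followed by a recognize call, but the exact method names
// depend on the wrapper version you built).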