I am using the following method to convert image data from the camera into a UIImage. To save some time and memory, I would like to crop the image data before converting it to a UIImage.
Ideally I would pass in a cropRect and get a cropped UIImage back. However, because the camera output can have different dimensions depending on whether I am using a photo or a video preset, I may not know which dimensions to use for the cropRect. I could specify the cropRect the way focus or exposure points are specified, with a CGPoint between (0,0) and (1,1), and do something similar for the cropRect's CGSize. Or I could get the dimensions of the sampleBuffer before calling the method below and pass in a cropRect in pixels. I would like advice on which approach I should use.
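For reference, if I went the normalized-coordinate route, mapping back to pixels once the buffer dimensions are known would be simple; a rough, untested sketch (the helper name is made up, and it assumes CoreMedia/CoreVideo are imported just as for the method below):

// Untested sketch: map a rect in normalized (0..1) coordinates to pixel
// coordinates for a particular sample buffer.
static CGRect pixelCropRectFromNormalizedRect(CMSampleBufferRef sampleBuffer, CGRect normalizedRect)
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t width  = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    return CGRectMake(normalizedRect.origin.x * width,
                      normalizedRect.origin.y * height,
                      normalizedRect.size.width * width,
                      normalizedRect.size.height * height);
}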
I would also like to know how best to crop so that I do not have to create a full UIImage and then crop it down. Typically I am only interested in keeping about 10–20% of the pixels. I assume I would have to iterate over the pixels and copy the cropRect into a different pixel buffer until I have all the pixels I want.
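Roughly, the kind of copy I have in mind looks like this (untested sketch; it assumes a kCVPixelFormatType_32BGRA buffer, i.e. 4 bytes per pixel, and a cropRect already expressed in pixels — the helper name is made up):

// Untested sketch: copy only the rows/columns inside cropRect from the source
// pixel buffer into a newly created, smaller pixel buffer.
static CVPixelBufferRef createCroppedPixelBuffer(CVPixelBufferRef source, CGRect cropRect)
{
    const size_t bytesPerPixel = 4; // assumes kCVPixelFormatType_32BGRA
    size_t cropWidth  = (size_t)cropRect.size.width;
    size_t cropHeight = (size_t)cropRect.size.height;

    CVPixelBufferRef cropped = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, cropWidth, cropHeight,
                                          kCVPixelFormatType_32BGRA, NULL, &cropped);
    if (status != kCVReturnSuccess) {
        return NULL;
    }

    CVPixelBufferLockBaseAddress(source, 0);
    CVPixelBufferLockBaseAddress(cropped, 0);

    uint8_t *srcBase = CVPixelBufferGetBaseAddress(source);
    uint8_t *dstBase = CVPixelBufferGetBaseAddress(cropped);
    size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(source);
    size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(cropped);

    // One memcpy per row, starting at the crop origin in the source.
    for (size_t row = 0; row < cropHeight; row++) {
        uint8_t *srcRow = srcBase
            + ((size_t)cropRect.origin.y + row) * srcBytesPerRow
            + (size_t)cropRect.origin.x * bytesPerPixel;
        memcpy(dstBase + row * dstBytesPerRow, srcRow, cropWidth * bytesPerPixel);
    }

    CVPixelBufferUnlockBaseAddress(cropped, 0);
    CVPixelBufferUnlockBaseAddress(source, 0);
    return cropped; // caller releases with CVPixelBufferRelease
}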
Keep in mind that, depending on the orientation, a rotation may also be involved.
+ (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer orientation:(UIImageOrientation) orientation
{
    // Create a UIImage from sample buffer data
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:(CGFloat)1.0 orientation:orientation];
    // Release the Quartz image
    CGImageRelease(quartzImage);
    return (image);
}
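For context, a method like this would typically be driven from the video data output's delegate callback; a minimal sketch, assuming this class has been set as the AVCaptureVideoDataOutput's sample buffer delegate (the orientation value and the previewImageView property are placeholders):

// Hypothetical delegate callback (assumes this class conforms to
// AVCaptureVideoDataOutputSampleBufferDelegate).
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // UIImageOrientationRight is a placeholder; derive the real value from the
    // device or connection orientation.
    UIImage *image = [[self class] imageFromSampleBuffer:sampleBuffer
                                             orientation:UIImageOrientationRight];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.previewImageView.image = image; // hypothetical UIImageView property
    });
}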
Answer (score: 2):
I think you should use pixels for the cropRect, since at some point you will have to convert the float values to pixel values anyway. The following code is untested, but it should give you the idea.
CGRect cropRect = CGRectMake(50, 50, 100, 100); // cropRect in pixels
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVReturn lock = CVPixelBufferLockBaseAddress(pixelBuffer, 0);
if (lock == kCVReturnSuccess) {
    size_t w = CVPixelBufferGetWidth(pixelBuffer);
    size_t h = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t bytesPerPixel = bytesPerRow / w; // 4 for a 32BGRA buffer
    unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

    UIGraphicsBeginImageContext(cropRect.size); // create context for image storage, use cropRect as size
    CGContextRef c = UIGraphicsGetCurrentContext();
    unsigned char *data = CGBitmapContextGetData(c);
    if (data != NULL) {
        // The destination context has its own row stride; don't assume it is
        // cropRect.size.width * bytesPerPixel.
        size_t destBytesPerRow = CGBitmapContextGetBytesPerRow(c);
        // Iterate over the pixels in cropRect. Note: the byte order of this
        // context may not match the camera's BGRA layout, so the channels may
        // need swapping.
        for (int y = cropRect.origin.y, yDest = 0; y < CGRectGetMaxY(cropRect); y++, yDest++) {
            for (int x = cropRect.origin.x, xDest = 0; x < CGRectGetMaxX(cropRect); x++, xDest++) {
                size_t offset = bytesPerRow * y + bytesPerPixel * x;                 // offset in source buffer
                size_t offsetDest = destBytesPerRow * yDest + bytesPerPixel * xDest; // offset in destination image
                for (size_t i = 0; i < bytesPerPixel; i++) {
                    data[offsetDest + i] = buffer[offset + i];
                }
            }
        }
    }
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
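If you want to avoid the per-pixel loop entirely, another (also untested) option is to point a bitmap context directly at the cropped region of the locked buffer and let Quartz do the copy. This again assumes a 32BGRA buffer (4 bytes per pixel) and a cropRect already in pixels; the helper name is made up:

// Untested sketch: build a UIImage straight from the cropped sub-region of the
// pixel buffer by offsetting the base address and keeping the full row stride.
static UIImage *croppedImageFromPixelBuffer(CVPixelBufferRef pixelBuffer, CGRect cropRect)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    uint8_t *base = CVPixelBufferGetBaseAddress(pixelBuffer);
    // Address of the first pixel inside cropRect (assumes 4 bytes per pixel).
    uint8_t *cropStart = base
        + (size_t)cropRect.origin.y * bytesPerRow
        + (size_t)cropRect.origin.x * 4;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Keep the source bytesPerRow so each cropped row starts at the right place.
    // Caveat: if the crop reaches the last row of the buffer and origin.x > 0,
    // the last row nominally extends past the end of the buffer; clamp the crop
    // or fall back to a row-by-row copy in that case.
    CGContextRef context = CGBitmapContextCreate(cropStart,
                                                 (size_t)cropRect.size.width,
                                                 (size_t)cropRect.size.height,
                                                 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];

    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return image;
}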