How to scale/resize a CVPixelBufferRef in Objective-C, iOS

Date: 2018-07-20 15:22:30

Tags: ios objective-c iphone image

I am trying to resize an image from a CVPixelBufferRef to 299x299, and ideally also crop it. The original pixel buffer is 640x320; the goal is to scale/crop to 299x299 without losing the aspect ratio (crop to center).

I found code for resizing a UIImage in Objective-C, but none for resizing a CVPixelBufferRef. I have found various very complicated Objective-C examples covering many different image types, but none specifically for resizing a CVPixelBufferRef.

What is the simplest/best way to do this? Please include exact code.

...I tried selton's answer, but it did not work, because the resulting type in the scaled buffer is not correct (it trips the assertion code):

OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
int doReverseChannels;
if (kCVPixelFormatType_32ARGB == sourcePixelFormat) {
  doReverseChannels = 1;
} else if (kCVPixelFormatType_32BGRA == sourcePixelFormat) {
  doReverseChannels = 0;
} else {
  assert(false);  // Unknown source format
}
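
(For context: when the buffers come from an AVCaptureVideoDataOutput, the delivered format can be pinned to BGRA so a check like the one above always passes; a minimal sketch, assuming an output named videoOutput:)

videoOutput.videoSettings = @{
    (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};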

4 Answers:

Answer 0 (score: 3)

Using CoreMLHelpers as inspiration, we can create a C function that does what you need. Based on your pixel format requirements, I think this solution will be the most efficient option. I tested it with an AVCaptureVideoDataOutput. I hope this helps!

The AVCaptureVideoDataOutputSampleBufferDelegate implementation: most of the work here is building the centered cropping rectangle, and AVMakeRectWithAspectRatioInsideRect is the key (it does exactly what you want).

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) { return; }

    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);

    CGRect videoRect = CGRectMake(0, 0, width, height);
    CGSize scaledSize = CGSizeMake(299, 299);

    // Create a rectangle that meets the output size's aspect ratio, centered in the original video frame
    CGRect centerCroppingRect = AVMakeRectWithAspectRatioInsideRect(scaledSize, videoRect);

    CVPixelBufferRef croppedAndScaled = createCroppedPixelBuffer(pixelBuffer, centerCroppingRect, scaledSize);

    // Do other things here
    // For example
    CIImage *image = [CIImage imageWithCVImageBuffer:croppedAndScaled];
    // End example

    CVPixelBufferRelease(croppedAndScaled);
}
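
For the asker's 640x320 frames, that call fits a 1:1 rectangle inside the frame and centers it; a quick check of the numbers:

// 299x299 (1:1) fitted inside 640x320 -> {160, 0, 320, 320};
// createCroppedPixelBuffer then scales that square down to 299x299.
CGRect cropRect = AVMakeRectWithAspectRatioInsideRect(CGSizeMake(299, 299),
                                                      CGRectMake(0, 0, 640, 320));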

Method 1: Data manipulation and Accelerate

The basic premise of this function is that it first crops to the specified rectangle, then scales to the final desired size. Cropping is achieved by simply ignoring the data outside the rectangle; scaling is done with Accelerate's vImageScale_ARGB8888 function. Thanks again to CoreMLHelpers for the insight.

void assertCropAndScaleValid(CVPixelBufferRef pixelBuffer, CGRect cropRect, CGSize scaleSize) {
    CGFloat originalWidth = (CGFloat)CVPixelBufferGetWidth(pixelBuffer);
    CGFloat originalHeight = (CGFloat)CVPixelBufferGetHeight(pixelBuffer);

    assert(CGRectContainsRect(CGRectMake(0, 0, originalWidth, originalHeight), cropRect));
    assert(scaleSize.width > 0 && scaleSize.height > 0);
}

void pixelBufferReleaseCallBack(void *releaseRefCon, const void *baseAddress) {
    if (baseAddress != NULL) {
        free((void *)baseAddress);
    }
}

// Returns a CVPixelBufferRef with +1 retain count
CVPixelBufferRef createCroppedPixelBuffer(CVPixelBufferRef sourcePixelBuffer, CGRect croppingRect, CGSize scaledSize) {

    OSType inputPixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer);
    assert(inputPixelFormat == kCVPixelFormatType_32BGRA
           || inputPixelFormat == kCVPixelFormatType_32ABGR
           || inputPixelFormat == kCVPixelFormatType_32ARGB
           || inputPixelFormat == kCVPixelFormatType_32RGBA);

    assertCropAndScaleValid(sourcePixelBuffer, croppingRect, scaledSize);

    if (CVPixelBufferLockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly) != kCVReturnSuccess) {
        NSLog(@"Could not lock base address");
        return nil;
    }

    void *sourceData = CVPixelBufferGetBaseAddress(sourcePixelBuffer);
    if (sourceData == NULL) {
        NSLog(@"Error: could not get pixel buffer base address");
        CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);
        return nil;
    }

    size_t sourceBytesPerRow = CVPixelBufferGetBytesPerRow(sourcePixelBuffer);
    // Note: assumes croppingRect has an integral origin; a fractional origin is
    // truncated here and can shift the crop by up to a pixel.
    size_t offset = CGRectGetMinY(croppingRect) * sourceBytesPerRow + CGRectGetMinX(croppingRect) * 4;

    vImage_Buffer croppedvImageBuffer = {
        .data = ((char *)sourceData) + offset,
        .height = (vImagePixelCount)CGRectGetHeight(croppingRect),
        .width = (vImagePixelCount)CGRectGetWidth(croppingRect),
        .rowBytes = sourceBytesPerRow
    };

    size_t scaledBytesPerRow = scaledSize.width * 4;
    void *scaledData = malloc(scaledSize.height * scaledBytesPerRow);
    if (scaledData == NULL) {
        NSLog(@"Error: out of memory");
        CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);
        return nil;
    }

    vImage_Buffer scaledvImageBuffer = {
        .data = scaledData,
        .height = (vImagePixelCount)scaledSize.height,
        .width = (vImagePixelCount)scaledSize.width,
        .rowBytes = scaledBytesPerRow
    };

    /* The ARGB8888, ARGB16U, ARGB16S and ARGBFFFF functions work equally well on
     * other channel orderings of 4-channel images, such as RGBA or BGRA.*/
    vImage_Error error = vImageScale_ARGB8888(&croppedvImageBuffer, &scaledvImageBuffer, nil, 0);
    CVPixelBufferUnlockBaseAddress(sourcePixelBuffer, kCVPixelBufferLock_ReadOnly);

    if (error != kvImageNoError) {
        NSLog(@"Error: %ld", error);
        free(scaledData);
        return nil;
    }

    OSType pixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer);
    CVPixelBufferRef outputPixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreateWithBytes(nil, scaledSize.width, scaledSize.height, pixelFormat, scaledData, scaledBytesPerRow, pixelBufferReleaseCallBack, nil, nil, &outputPixelBuffer);

    if (status != kCVReturnSuccess) {
        NSLog(@"Error: could not create new pixel buffer");
        free(scaledData);
        return nil;
    }

    return outputPixelBuffer;
}
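
The same function also works outside a capture callback; a minimal sketch, assuming you already hold a 640x320 BGRA buffer named sourceBuffer:

CGRect videoRect = CGRectMake(0, 0, 640, 320);
CGRect cropRect = AVMakeRectWithAspectRatioInsideRect(CGSizeMake(299, 299), videoRect);

CVPixelBufferRef resized = createCroppedPixelBuffer(sourceBuffer, cropRect, CGSizeMake(299, 299));
if (resized != NULL) {
    // ... feed the 299x299 buffer to your model here ...
    CVPixelBufferRelease(resized); // balance the +1 retain count
}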

Method 2: CoreImage

This method is much easier to read and has the advantage of being fairly agnostic to the pixel buffer format you pass in, which is a plus for certain use cases. Naturally, you are limited to the formats that CoreImage supports.

CVPixelBufferRef createCroppedPixelBufferCoreImage(CVPixelBufferRef pixelBuffer,
                                                   CGRect cropRect,
                                                   CGSize scaleSize,
                                                   CIContext *context) {

    assertCropAndScaleValid(pixelBuffer, cropRect, scaleSize);

    CIImage *image = [CIImage imageWithCVImageBuffer:pixelBuffer];
    image = [image imageByCroppingToRect:cropRect];

    CGFloat scaleX = scaleSize.width / CGRectGetWidth(image.extent);
    CGFloat scaleY = scaleSize.height / CGRectGetHeight(image.extent);

    image = [image imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];

    // Due to the way -[CIContext render:toCVPixelBuffer:] works, we need to translate the image so the cropped section is at the origin
    image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-image.extent.origin.x, -image.extent.origin.y)];

    CVPixelBufferRef output = NULL;

    CVPixelBufferCreate(nil,
                        CGRectGetWidth(image.extent),
                        CGRectGetHeight(image.extent),
                        CVPixelBufferGetPixelFormatType(pixelBuffer),
                        nil,
                        &output);

    if (output != NULL) {
        [context render:image toCVPixelBuffer:output];
    }

    return output;
}

The CIContext can be created at the call site, or it can be created once and stored in a property. For information on the options, see the documentation.

// Create a CIContext using default settings, this will
// typically use the GPU and Metal by default if supported
if (self.context == nil) {
    self.context = [CIContext context];
}
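
Calling the function from the delegate callback above might then look like this (a sketch reusing pixelBuffer, centerCroppingRect, and scaledSize from that example):

CVPixelBufferRef output = createCroppedPixelBufferCoreImage(pixelBuffer,
                                                            centerCroppingRect,
                                                            scaledSize,
                                                            self.context);
if (output != NULL) {
    // ... use the 299x299 buffer ...
    CVPixelBufferRelease(output); // CVPixelBufferCreate returned it with +1 retain
}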

Answer 1 (score: 3)

Swift version of @allenh's answer.

Answer 2 (score: 0)

You could consider using CIImage:

CIImage *image = [CIImage imageWithCVPixelBuffer:pxbuffer];
CIImage *scaledImage = [image imageByApplyingTransform:CGAffineTransformMakeScale(0.1, 0.1)];
// Caveat: -pixelBuffer is only non-nil for a CIImage created directly from a
// pixel buffer; after a transform it is nil, so in practice you must render
// the scaled image into a new buffer with a CIContext (see Answer 0, Method 2).
CVPixelBufferRef scaledBuf = [scaledImage pixelBuffer];

You should change the scale factors to fit your destination size.
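
For the 299x299 target in the question, the factors can be derived from the source buffer's dimensions; a sketch (note this stretches the image, so combine it with imageByCroppingToRect: if the aspect ratio must be preserved):

CGFloat scaleX = 299.0 / (CGFloat)CVPixelBufferGetWidth(pxbuffer);
CGFloat scaleY = 299.0 / (CGFloat)CVPixelBufferGetHeight(pxbuffer);
CIImage *scaled = [image imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];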

Answer 3 (score: 0)

Step 1

Convert the CVPixelBuffer to a UIImage: start with [CIImage imageWithCVPixelBuffer:], then convert that CIImage to a CGImage, and that CGImage to a UIImage, using the standard methods.

CIImage *ciimage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgimage = [context
                   createCGImage:ciimage
                   fromRect:CGRectMake(0, 0, 
                          CVPixelBufferGetWidth(pixelBuffer),
                          CVPixelBufferGetHeight(pixelBuffer))];

UIImage *uiimage = [UIImage imageWithCGImage:cgimage];
CGImageRelease(cgimage);

Step 2

Place the image in a UIImageView to scale/crop it to the desired size:

UIImageView *imageView = [[UIImageView alloc] initWithFrame:/*CGRect with new dimensions*/];
imageView.contentMode = /*UIViewContentMode with desired scaling/clipping style*/;
imageView.image = uiimage;
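
For the center-crop behavior the question asks for, aspect-fill plus clipping is one reasonable choice (the 299x299 frame and content mode here are illustrative):

UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 299, 299)];
imageView.contentMode = UIViewContentModeScaleAspectFill; // fill the frame, cropping the overflow
imageView.clipsToBounds = YES;                            // actually clip to the 299x299 bounds
imageView.image = uiimage;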

Step 3

Capture a snapshot of that imageView's CALayer using something like this:

#define snapshotOfView(__view) (\
(^UIImage *(void) {\
CGRect __rect = [__view bounds];\
UIGraphicsBeginImageContextWithOptions(__rect.size, /*(BOOL)Opaque*/, /*(float)scaleResolution*/);\
CGContextRef __context = UIGraphicsGetCurrentContext();\
[__view.layer renderInContext:__context];\
UIImage *__image = UIGraphicsGetImageFromCurrentImageContext();\
UIGraphicsEndImageContext();\
return __image;\
})()\
)

In use:

uiimage = snapshotOfView(imageView);

Step 4

Convert the snapshot UIImage (now cropped/scaled) back to a CVPixelBuffer using the approach from https://stackoverflow.com/a/34990820/2057171.

That is:

- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
    NSDictionary *options = @{
                              (NSString*)kCVPixelBufferCGImageCompatibilityKey : @YES,
                              (NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
                              };

    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
                        CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
                        &pxbuffer);
    if (status!=kCVReturnSuccess) {
        NSLog(@"Operation failed");
    }
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
                                                 CGImageGetHeight(image), 8, 4*CGImageGetWidth(image), rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGAffineTransform flipVertical = CGAffineTransformMake( 1, 0, 0, -1, 0, CGImageGetHeight(image) );
    CGContextConcatCTM(context, flipVertical);
    CGAffineTransform flipHorizontal = CGAffineTransformMake( -1.0, 0.0, 0.0, 1.0, CGImageGetWidth(image), 0.0 );
    CGContextConcatCTM(context, flipHorizontal);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}

In use:

pixelBuffer = [self pixelBufferFromCGImage:uiimage.CGImage];
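
Note that the buffer returned by pixelBufferFromCGImage: carries a +1 retain count, so release it when you are finished:

// ... use pixelBuffer ...
CVPixelBufferRelease(pixelBuffer);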