I capture video and run some analysis on each frame in the captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer delegate. After a while this method stops being called, and the captureOutput:(AVCaptureOutput *)output didDropSampleBuffer delegate is called instead.
When I do nothing in didOutputSampleBuffer, everything works fine. I run a TensorFlow model inside this delegate, and that is what causes the problem.
Problem:
Once didDropSampleBuffer has been called, didOutputSampleBuffer is never called again.
My solution:
My workaround is to stop and restart the AVCaptureSession, but this causes extra memory usage and eventually crashes my app.
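The restart looks roughly like this (a sketch; _sessionQueue and _avCaptureSession are placeholder names for my serial session queue and session):

- (void)restartCaptureSession
{
    dispatch_async(_sessionQueue, ^{
        [_avCaptureSession stopRunning];  // tears down the capture pipeline and its buffer pool
        [_avCaptureSession startRunning]; // rebuilds it, which is where I see the extra memory usage
    });
}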
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // ****** heavy work happens in this delegate *********
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    graph = [TensorflowGraph new];
    predictions = [graph runModelOnPixelBuffer:pixelBuffer orientation:UIDeviceOrientationPortrait CardRect:_vwRect];
}
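For context, the video output is configured along these lines (a simplified sketch; the queue label and pixel format are placeholders, not necessarily my exact values):

AVCaptureVideoDataOutput *videoOutput = [AVCaptureVideoDataOutput new];
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
videoOutput.alwaysDiscardsLateVideoFrames = YES; // late frames are dropped instead of queued
dispatch_queue_t videoQueue = dispatch_queue_create("video.output.queue", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:videoQueue];
if ([_avCaptureSession canAddOutput:videoOutput]) {
    [_avCaptureSession addOutput:videoOutput];
}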
- (void)captureOutput:(AVCaptureOutput *)output didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CFTypeRef droppedFrameReason = CMGetAttachment(sampleBuffer, kCMSampleBufferAttachmentKey_DroppedFrameReason, NULL);
NSLog(@"dropped frame, reason: %@", droppedFrameReason);
}
----> dropped frame, reason: OutOfBuffers
According to https://developer.apple.com/library/archive/technotes/tn2445/_index.html:

"This condition is typically caused by the client holding onto buffers for too long, and can be alleviated by returning buffers to the provider."

How can I return the buffers to the provider?
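If I understand the technote correctly, "returning" a buffer just means releasing every object that still references the CVPixelBuffer before the delegate returns, for example by draining the temporary objects in an autorelease pool (a sketch; processPixelBuffer: is a hypothetical helper):

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Temporary CIImage/UIImage objects are drained when the pool exits,
        // so the buffer can return to the capture pool after this call.
        [self processPixelBuffer:pixelBuffer]; // hypothetical helper
    }
}

Is that the right approach?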
EDIT:
After the line CGImageRef cgImage = [context createCGImage:resized fromRect:resized.extent]; has executed 11 times, didDropSampleBuffer starts being called. Commenting out CFRelease(pixelBuffer) makes no difference. Does this mean pixelBuffer is never being released?
CFRetain(pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly); // locked here, never unlocked
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer]; // CIImage keeps a reference to pixelBuffer
ciImage = [ciImage imageByCroppingToRect:cropRect];
CGAffineTransform transform = CGAffineTransformIdentity;
CGFloat angle = 0.0;
transform = CGAffineTransformRotate(transform, angle);
CIImage *resized = [ciImage imageByApplyingTransform:transform];
CIContext *context = [CIContext contextWithOptions:nil]; // a fresh context is created for every frame
CGImageRef cgImage = [context createCGImage:resized fromRect:resized.extent]; // *********************************
UIImage *_res = [[UIImage alloc] initWithCGImage:cgImage]; // cgImage is never released with CGImageRelease
CFRelease(pixelBuffer);
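Is the problem simply that these resources are never balanced? From my reading of the Core Video and Core Graphics docs, the cleanup at the end of this path should look something like this (a sketch, untested):

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly); // balance the lock taken above
CGImageRelease(cgImage); // CGImageRef is not managed by ARC and must be released manually
CFRelease(pixelBuffer); // balance the CFRetain at the top

I also suspect the CIContext should be created once and reused rather than recreated for every frame, since context creation is expensive.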