I'm running into a memory leak. According to the profiler (with Allocations), it happens inside CIContext's createCGImage function. I've searched Stack Overflow for similar questions, but I haven't found a solution yet. I tried wrapping the call in an autoreleasepool, but the memory leak is still there.
How can I create a CGImage from a CIContext without leaking memory in Swift 4?
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Only run when currentFrame is finished
    guard self.currentPixelBuffer == nil else { return } // , case .normal = frame.camera.trackingState
    self.currentPixelBuffer = frame.capturedImage
    guard let currentPixelBuffer = self.currentPixelBuffer else { return }
    let ciImage = CIImage(cvPixelBuffer: currentPixelBuffer).oriented(CGImagePropertyOrientation.init(UIDevice.current.orientation))
    let cgImage: CGImage? = self.context?.createCGImage(ciImage, from: ciImage.extent)
    // var cgImage: CGImage?
    // autoreleasepool { [weak self] () -> () in
    //     cgImage = self?.context?.createCGImage(ciImage, from: ciImage.extent)
    // }
    guard let unwrappedCgImage = cgImage else { return }
    let uiImage = UIImage.init(cgImage: unwrappedCgImage)
    let visionImage = VisionImage(image: uiImage)
    self.backgroundQueue.async {
        self.textDetector?.detect(in: visionImage, completion: { [weak self] (features, error) in
            ...
P.S. Here is my context declaration:
var context: CIContext? = CIContext.init(options: nil)
Answer 0 (score: 0)
So the problem was actually in the "self.textDetector?.detect(in: visionImage...)" call: it keeps a strong reference to the visionImage.
I couldn't fix that directly, but I was able to work around it by letting VisionImage take the rotation into account instead of rotating the image myself.
I ended up with this working code:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Only run when currentFrame is finished
    guard self.currentPixelBuffer == nil else { return } // , case .normal = frame.camera.trackingState
    self.currentPixelBuffer = frame.capturedImage
    guard let currentPixelBuffer = self.currentPixelBuffer else { return }
    let visionImage = VisionImage(buffer: self.getCMSampleBuffer(pixelBuffer: currentPixelBuffer))
    let metadata = VisionImageMetadata()
    switch UIApplication.shared.statusBarOrientation {
    case .landscapeLeft:
        metadata.orientation = .bottomRight
    case .landscapeRight:
        metadata.orientation = .topLeft
    case .portrait:
        metadata.orientation = .rightTop
    case .portraitUpsideDown:
        metadata.orientation = .leftBottom
    default:
        metadata.orientation = .topLeft
    }
    visionImage.metadata = metadata
    self.backgroundQueue.async {
        self.textDetector?.detect(in: visionImage, completion: { [weak self] (features, error) in
            ...
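The code above calls a getCMSampleBuffer(pixelBuffer:) helper that isn't shown in the answer. A minimal sketch of such a conversion, assuming Swift 4 era CoreMedia calls and that VisionImage(buffer:) only needs the image data (so invalid timing info is fine), could look like this:

import CoreMedia
import CoreVideo

// Hypothetical helper (not from the original answer): wrap the CVPixelBuffer
// from ARFrame.capturedImage in a CMSampleBuffer so it can be handed to VisionImage(buffer:).
func getCMSampleBuffer(pixelBuffer: CVPixelBuffer) -> CMSampleBuffer {
    // Describe the pixel buffer's format.
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &formatDescription)

    // No real timing information is needed here, so use invalid times.
    var timingInfo = CMSampleTimingInfo(duration: kCMTimeInvalid,
                                        presentationTimeStamp: kCMTimeInvalid,
                                        decodeTimeStamp: kCMTimeInvalid)

    // Create a sample buffer that simply references the existing pixel buffer.
    // The force unwraps assume both CoreMedia calls succeed.
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReadyWithImageBuffer(kCFAllocatorDefault,
                                             pixelBuffer,
                                             formatDescription!,
                                             &timingInfo,
                                             &sampleBuffer)
    return sampleBuffer!
}

Handing the pixel buffer to VisionImage this way, with the rotation expressed through VisionImageMetadata, means the CIContext.createCGImage round-trip from the original code is no longer needed at all.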