Memory leak when making a CGImage from an MTLTexture (Swift, macOS)

Date: 2018-08-21 00:44:32

Tags: swift macos cgcontext metal cgimage

I have a Metal app, and I'm trying to export frames to a QuickTime movie. I render the frames at very high resolution and then scale them down before writing, in order to antialias the scene.

To scale it down, I convert the high-resolution texture to a CGImage, then resize that image and write out the smaller version. I found this extension online for converting an MTLTexture to a CGImage:

extension MTLTexture {

func bytes() -> UnsafeMutableRawPointer {
    let width = self.width
    let height   = self.height
    let rowBytes = self.width * 4
    let p = malloc(width * height * 4)

    self.getBytes(p!, bytesPerRow: rowBytes, from: MTLRegionMake2D(0, 0, width, height), mipmapLevel: 0)

    return p!
}

func toImage() -> CGImage? {
    let p = bytes()

    let pColorSpace = CGColorSpaceCreateDeviceRGB()

    let rawBitmapInfo = CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue // noneSkipFirst
    let bitmapInfo:CGBitmapInfo = CGBitmapInfo(rawValue: rawBitmapInfo)

    let size = self.width * self.height * 4
    let rowBytes = self.width * 4

    let releaseMaskImagePixelData: CGDataProviderReleaseDataCallback = { (info: UnsafeMutableRawPointer?, data: UnsafeRawPointer, size: Int) -> () in
        // https://developer.apple.com/reference/coregraphics/cgdataproviderreleasedatacallback
        // N.B. 'CGDataProviderRelease' is unavailable: Core Foundation objects are automatically memory managed
        return
    }
    if let provider = CGDataProvider(dataInfo: nil, data: p, size: size, releaseData: releaseMaskImagePixelData) {

        let cgImageRef = CGImage(width: self.width, height: self.height, bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: rowBytes, space: pColorSpace, bitmapInfo: bitmapInfo, provider: provider, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)!

        return cgImageRef
    }
    return nil
}

}  // end extension

I'm not positive, but it seems like something in this function is causing a memory leak: every frame, it holds on to an amount of memory equal to the giant texture/CGImage and never releases it.

The CGDataProvider initializer takes a 'releaseData' callback argument, but I was under the impression that it is no longer needed.

I also resize the CGImage, which could also be leaking, I don't know. However, I can comment out the resizing and writing of the frame and the memory leak still accumulates, so it seems to me that the conversion to CGImage is the main problem.

extension CGImage {

func resize(_ scale:Float) -> CGImage? {

    let imageWidth = Float(width)
    let imageHeight = Float(height)

    let w = Int(imageWidth * scale)
    let h = Int(imageHeight * scale)

    guard let colorSpace = colorSpace else { return nil }
    guard let context = CGContext(data: nil, width: w, height: h, bitsPerComponent: bitsPerComponent, bytesPerRow: Int(Float(bytesPerRow)*scale), space: colorSpace, bitmapInfo: alphaInfo.rawValue) else { return nil }

    // draw image to context (resizing it)
    context.interpolationQuality = .high
    let r = CGRect(x: 0, y: 0, width: w, height: h)
    context.clear(r)
    context.draw(self, in:r)

    // extract resulting image from context
    return context.makeImage()

}
}

Finally, here's the big function I call on every frame while exporting. Sorry for the length, but it's probably better to give too much information than too little. So, basically: at the start of the render I allocate a giant MTLTexture ('exportTextureBig'), the size of the normal screen multiplied by 'zoom_subdivisions' in each direction. I render the scene in chunks, one for each spot on the grid, and assemble the big frame by copying each chunk onto the big texture with blitCommandEncoder.copy(). Once the whole frame is filled in, I try to make a CGImage from it, shrink that to another CGImage, and write it out.

I call commandBuffer.waitUntilCompleted() every frame while exporting, hoping to prevent the renderer from holding on to textures it's still using.

func exportFrame2(_ commandBuffer:MTLCommandBuffer, _ texture:MTLTexture)  {  // texture is the offscreen render target for the screen-size chunks

    if zoom_index < zoom_subdivisions*zoom_subdivisions {  // copy screen-size chunk to large texture

        if let blitCommandEncoder = commandBuffer.makeBlitCommandEncoder() {

            let dx = Int(BigRender.globals_L.displaySize.x) * (zoom_index%zoom_subdivisions)
            let dy = Int(BigRender.globals_L.displaySize.y) * (zoom_index/zoom_subdivisions)
            blitCommandEncoder.copy(from:texture,
                                    sourceSlice: 0,
                                    sourceLevel: 0,
                                    sourceOrigin: MTLOrigin(x:0,y:0,z:0),
                                    sourceSize: MTLSize(width:Int(BigRender.globals_L.displaySize.x),height:Int(BigRender.globals_L.displaySize.y), depth:1),
                                    to:BigVideoWriter!.exportTextureBig!,
                                    destinationSlice: 0,
                                    destinationLevel: 0,
                                    destinationOrigin: MTLOrigin(x:dx,y:dy,z:0))

            blitCommandEncoder.synchronize(resource: BigVideoWriter!.exportTextureBig!)
            blitCommandEncoder.endEncoding()
        }

    }


    commandBuffer.commit()
    commandBuffer.waitUntilCompleted() // do this instead

    // is big frame complete?
    if (zoom_index == zoom_subdivisions*zoom_subdivisions-1) {

        // shrink the big texture here

        if let cgImage = self.exportTextureBig!.toImage() {  // memory leak here?

            // this can be commented out and memory leak still happens
            if let smallImage = cgImage.resize(1.0/Float(zoom_subdivisions)) {
                writeFrame(nil, smallImage)
            }

        }

    }

}

This all works, except for the giant memory leak. Is there anything I can do to make it release the CGImage data each frame? Why is it holding on to it?

Thanks very much for any suggestions!

2 Answers:

Answer 0 (score: 1)

I think you're misinterpreting the issue, because of the note in the CGDataProviderReleaseDataCallback documentation that CGDataProviderRelease() is unavailable.

CGDataProviderRelease() (in C) is used to release the CGDataProvider object itself. That's not the same thing as the byte buffer you supply to the CGDataProvider when you create it.

In Swift, the lifetime of the CGDataProvider object is managed for you, but that doesn't help with freeing the byte buffer.

Ideally, CGDataProvider would be able to manage the byte buffer's lifetime automatically, but it can't. CGDataProvider doesn't know how to free that byte buffer, because it doesn't know how it was allocated. That's why you have to supply a callback that it can use to free it. You are essentially supplying the knowledge of how to free the byte buffer.

Since you allocate the byte buffer with malloc(), your callback needs to free() it.
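A minimal sketch of that change, replacing the no-op callback in the question's toImage() (everything else stays the same):

```swift
import CoreGraphics

// Release callback that actually frees the malloc'd pixel buffer.
// `data` is the same pointer that was passed to
// CGDataProvider(dataInfo:data:size:releaseData:).
let releaseMaskImagePixelData: CGDataProviderReleaseDataCallback = { _, data, _ in
    // The buffer was allocated with malloc(), so free() it here.
    free(UnsafeMutableRawPointer(mutating: data))
}
```

With this in place, the buffer is freed when the CGImage (and its data provider) is released, so there is no need to call deallocate() yourself.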

That said, you'd be better off using CFMutableData rather than UnsafeMutableRawPointer. Then, create the data provider with CGDataProvider(data:). In that case, all of the memory is managed for you.
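A sketch of that CFMutableData approach, reusing the pixel format and layout from the question's extension (the function name here is illustrative, not from the original code):

```swift
import CoreGraphics
import Metal

extension MTLTexture {
    // Copy the texture's pixels into a CFMutableData, whose lifetime Core
    // Foundation manages, then hand it to CGDataProvider(data:). No manual
    // malloc/free or release callback is needed.
    func toImageManaged() -> CGImage? {
        let rowBytes = width * 4
        let size = rowBytes * height
        guard let data = CFDataCreateMutable(nil, size) else { return nil }
        CFDataSetLength(data, size)
        guard let bytes = CFDataGetMutableBytePtr(data) else { return nil }

        getBytes(bytes, bytesPerRow: rowBytes,
                 from: MTLRegionMake2D(0, 0, width, height),
                 mipmapLevel: 0)

        let bitmapInfo = CGBitmapInfo(rawValue:
            CGImageAlphaInfo.premultipliedFirst.rawValue |
            CGBitmapInfo.byteOrder32Little.rawValue)
        guard let provider = CGDataProvider(data: data) else { return nil }
        return CGImage(width: width, height: height,
                       bitsPerComponent: 8, bitsPerPixel: 32,
                       bytesPerRow: rowBytes,
                       space: CGColorSpaceCreateDeviceRGB(),
                       bitmapInfo: bitmapInfo, provider: provider,
                       decode: nil, shouldInterpolate: true,
                       intent: .defaultIntent)
    }
}
```

Because the provider retains the CFData, the buffer lives exactly as long as the image needs it and is released automatically afterwards.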

Answer 1 (score: 0)

I used very similar code, and the problem was solved once I added code to release p:

    func toImage() -> CGImage? {
    let p = bytes()
    
    let pColorSpace = CGColorSpaceCreateDeviceRGB()
    
    let rawBitmapInfo = CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue // noneSkipFirst
    let bitmapInfo:CGBitmapInfo = CGBitmapInfo(rawValue: rawBitmapInfo)
    
    let size = self.width * self.height * 4
    let rowBytes = self.width * 4
    
    let releaseMaskImagePixelData: CGDataProviderReleaseDataCallback = { (info: UnsafeMutableRawPointer?, data: UnsafeRawPointer, size: Int) -> () in
        // https://developer.apple.com/reference/coregraphics/cgdataproviderreleasedatacallback
        // N.B. 'CGDataProviderRelease' is unavailable: Core Foundation objects are automatically memory managed
        return
    }
    if let provider = CGDataProvider(dataInfo: nil, data: p, size: size, releaseData: releaseMaskImagePixelData) {

        let cgImageRef = CGImage(width: self.width, height: self.height, bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: rowBytes, space: pColorSpace, bitmapInfo: bitmapInfo, provider: provider, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)!
        p.deallocate() //this fixes the memory leak
        return cgImageRef
    }
    p.deallocate() //this fixes the memory leak, but the data provider is no longer available (you just deallocated it's backing store)
    return nil
}

And anywhere in Swift that you need to use the CGImage briefly:

autoreleasepool {
let lastDrawableDisplayed = self.metalView?.currentDrawable?.texture
let cgImage = lastDrawableDisplayed?.toImage() // your code to convert drawable to CGImage
// do work with cgImage
}