Scaled-up MTKView shows gaps where CIImages are joined

Date: 2018-10-09 16:44:28

Tags: swift metal cifilter ciimage

I am using an MTKView written by Simon Gladman that "exposes an image property of type CIImage to simplify Metal-based Core Image filter rendering." Its behavior has been slightly modified: I removed an additional scaling operation it performed, since that is not relevant to the problem here.

The problem: when the smaller CIImages are composited into the larger CIImage, they align pixel-perfectly. The MTKView's image property is set to this composite CIImage. However, the image is then scaled so that it fills the whole MTKView, and that scaling makes a gap visible between the joined images. The scale factor is computed by dividing the drawableSize width/height by the CIImage's extent width/height.
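To illustrate with hypothetical numbers (not taken from the question): if the composite is 1080 pixels wide with the seam between its two halves at x = 540, and the drawable is 1125 pixels wide, the non-integer scale maps the seam to a fractional pixel coordinate:

    import CoreGraphics

    // Hypothetical sizes: a 1080 px wide composite whose halves meet
    // at x = 540, drawn into an 1125 px wide drawable.
    let extentWidth: CGFloat = 1080
    let drawableWidth: CGFloat = 1125
    let seamX: CGFloat = 540

    let scale = drawableWidth / extentWidth   // 25/24 ≈ 1.0417
    let mappedSeamX = seamX * scale           // 562.5, a fractional pixel

At a fractional seam coordinate, Core Image's default bilinear sampling blends each boundary texel with its neighbor, which can read as a faint gap.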

This makes me wonder whether something needs to happen on the CIImage side to actually join those pixels. Saving the CIImage to the camera roll shows no separation between the joined images; the gap is only visible when the MTKView scales the image up. Also, whatever needs to be done should have essentially no performance cost, because these images are rendered live from the camera's output (the MTKView is a preview of the finished effect).

Here is the MTKView I use for rendering:

class MetalImageView: MTKView
{
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    var textureCache: CVMetalTextureCache?

    var sourceTexture: MTLTexture!

    lazy var commandQueue: MTLCommandQueue =
    {
        [unowned self] in

        return self.device!.makeCommandQueue()
    }()!

    lazy var ciContext: CIContext =
    {
        [unowned self] in

        //cacheIntermediates
        return CIContext(mtlDevice: self.device!, options: [.cacheIntermediates: false])
        //return CIContext(mtlDevice: self.device!)
    }()

    override init(frame frameRect: CGRect, device: MTLDevice?)
    {
        super.init(frame: frameRect,
                   device: device ?? MTLCreateSystemDefaultDevice())

        if super.device == nil
        {
            fatalError("Device doesn't support Metal")
        }

        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.device!, nil, &textureCache)

        framebufferOnly = false
        enableSetNeedsDisplay = true
        isPaused = true
        preferredFramesPerSecond = 30
    }

    required init(coder: NSCoder)
    {
        fatalError("init(coder:) has not been implemented")
    }

    /// The image to display
    var image: CIImage?
    {
        didSet
        {
            //renderImage()
            //draw()
            setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect)
    {
        guard let image = image,
              let targetTexture = currentDrawable?.texture else
        {
            return
        }

        let commandBuffer = commandQueue.makeCommandBuffer()

        let bounds = CGRect(origin: .zero, size: drawableSize)

        let originX = image.extent.origin.x
        let originY = image.extent.origin.y

        let scaleX = drawableSize.width / image.extent.width
        let scaleY = drawableSize.height / image.extent.height
        let scale = min(scaleX, scaleY)
        let scaledImage = image
            .transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
            .transformed(by: CGAffineTransform(scaleX: scale, y: scale))

        ciContext.render(scaledImage,
                         to: targetTexture,
                         commandBuffer: commandBuffer,
                         bounds: bounds,
                         colorSpace: colorSpace)

        commandBuffer?.present(currentDrawable!)
        commandBuffer?.commit()
    }
}
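If the seam really is a sampling artifact of the non-integer scale, one thing worth trying (a sketch of an assumption, not a confirmed fix) is to switch the image to nearest-neighbor sampling before the scale transform, so that no blending happens across the boundary:

    // Hypothetical variant of the scaling code in draw(_:).
    // samplingNearest() (iOS 11+/macOS 10.13+) disables bilinear
    // filtering, so upscaling cannot blend across the seam between
    // the composited images.
    let scaledImage = image
        .samplingNearest()
        .transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
        .transformed(by: CGAffineTransform(scaleX: scale, y: scale))

Nearest-neighbor sampling trades the gap for blocky edges, so an alternative is to round the scale factor (or the drawable bounds) so that the seam lands on a whole drawable pixel.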

When compositing the images, I use the full-size camera image as the background, and it serves as the basis for the dimensions. I then duplicate half of the image's width or height with the CISourceAtopCompositing CIFilter, positioning the copy with a CGAffineTransform. I also give the transform a negative scale to create a mirror effect:

    var scaledImageTransform = CGAffineTransform.identity

    scaledImageTransform = scaledImageTransform.translatedBy(x:0, y:sourceCore.extent.height)

    scaledImageTransform = scaledImageTransform.scaledBy(x:1.0, y:-1.0)
    alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
                                                      parameters: [kCIInputImageKey: alphaMaskBlend2!,
                                                                   kCIInputBackgroundImageKey: sourceCore])

    alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
                                                      parameters: [kCIInputImageKey: (alphaMaskBlend2?.cropped(to: cropRect).transformed(by: scaledImageTransform))!,
                                                                   kCIInputBackgroundImageKey: alphaMaskBlend2!])

sourceCore is the original image captured from the camera. alphaMaskBlend2 is the final CIImage that I assign to the MTKView. cropRect correctly crops the mirrored part of the image. In the scaled-up MTKView there is a clearly visible gap between these two joined CIImages. How can I make this image display as continuous pixels, no matter how the MTKView scales it, just like any other image?
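For reference, the two CISourceAtopCompositing passes above can be sketched with CIImage.composited(over:) instead (an assumption: composited(over:) is source-over rather than source-atop, so it only matches when the foreground is opaque everywhere inside its extent), and the crop rect can be snapped to whole pixels with .integral so the mirrored half abuts the original on an integer boundary:

    // Hypothetical rewrite of the two compositing passes.
    var mirror = CGAffineTransform.identity
    mirror = mirror.translatedBy(x: 0, y: sourceCore.extent.height)
    mirror = mirror.scaledBy(x: 1.0, y: -1.0)

    // Source-over stands in for source-atop here, assuming the
    // foreground is opaque inside its extent.
    let base = alphaMaskBlend2!.composited(over: sourceCore)

    let mirrored = base
        .cropped(to: cropRect.integral)   // snap the crop to whole pixels
        .transformed(by: mirror)

    let result = mirrored.composited(over: base)

A fractional cropRect is one plausible way for the seam to end up on a sub-pixel boundary in the first place, which would explain why the gap only appears once the preview is scaled.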

0 answers:

No answers yet.