I am applying a perspective Core Image filter to transform a CIImage and draw it into a custom NSView, and it seems slower than I expected (for example, I drag a slider that changes the perspective transform and the drawing lags behind the slider value). Here is my custom drawRect method, where self.mySourceImage is a CIImage:
- (void)drawRect:(NSRect)dirtyRect {
    [super drawRect:dirtyRect];

    if (self.perspectiveFilter == nil)
        self.perspectiveFilter = [CIFilter filterWithName:@"CIPerspectiveTransform"];

    [self.perspectiveFilter setValue:self.mySourceImage
                              forKey:@"inputImage"];
    [self.perspectiveFilter setValue:[CIVector vectorWithX:0 Y:0]
                              forKey:@"inputBottomLeft"];
    // ... set other vector parameters based on slider value

    CIImage *outputImage = [self.perspectiveFilter outputImage];
    [outputImage drawInRect:dstrect
                   fromRect:srcRect
                  operation:NSCompositingOperationSourceOver
                   fraction:0.8];
}
Here is an example of the output:
My experience with image filters tells me this should be much faster. Are there any "best practices" I'm missing that would speed this up? Two things I'm wondering about in particular:

- CALayer - should I somehow attach the filter to a CALayer instead? (A rough sketch of what I mean is below.)
- CIContext - I assume NSView is using an implicit context here? Should I create my own CIContext, render the image with it, and then draw the result?
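To make the CALayer idea a bit more concrete, this is roughly what I have in mind (an untested sketch in Swift; the class and method names are just placeholders, and it assumes the layer already has contents to filter):

import Cocoa
import QuartzCore
import CoreImage

// Untested sketch of the "filter on a CALayer" idea. Names such as
// PerspectiveHostView, attachFilter and updatePerspective are placeholders,
// and the layer is assumed to already have contents to filter (for example
// layer?.contents set to a CGImage of the source).
final class PerspectiveHostView: NSView {

    private let filterName = "perspective"    // key used in the filters key path below

    func attachFilter() {
        wantsLayer = true
        layerUsesCoreImageFilters = true      // required before CI filters work on an NSView's layer

        let filter = CIFilter(name: "CIPerspectiveTransform")!
        filter.name = filterName              // lets the filter be addressed as "filters.perspective"
        layer?.filters = [filter]
    }

    // Called from the slider action: only the filter parameters change,
    // and Core Animation re-renders the layer with the new values.
    func updatePerspective(topLeft: CGPoint) {
        layer?.setValue(CIVector(x: topLeft.x, y: topLeft.y),
                        forKeyPath: "filters.\(filterName).inputTopLeft")
        // ... update inputTopRight / inputBottomLeft / inputBottomRight the same way
    }
}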
Answer 0 (score: 0)

Sometimes I wish things like this didn't have limits (code formatting and character counts), but I hope this is useful. Here's how I use a GLKView in UIKit - hopefully it translates into something on macOS....

I prefer to subclass GLKView so that I can:

- override draw(rect:)
- get an equivalent of UIImageView's contentMode (in particular aspect fit, i.e. scaleAspectFit)
- set a "clear color" that matches the background color of the surrounding superview

That said, here's what I have. Hope it helps:
import GLKit

class ImageView: GLKView {

    var renderContext: CIContext
    var rgb: (Int?, Int?, Int?)!
    var myClearColor: UIColor!
    var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    // Assigning a new CIImage triggers a redraw.
    var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    // Renders the current CIImage into a UIImage (handy for sharing/exporting).
    var uiImage: UIImage? {
        get {
            let final = renderContext.createCGImage(self.image, from: self.image.extent)
            return UIImage(cgImage: final!)
        }
    }

    init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
        self.translatesAutoresizingMaskIntoConstraints = false
    }

    override init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
        self.translatesAutoresizingMaskIntoConstraints = false
    }

    required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        self.translatesAutoresizingMaskIntoConstraints = false
    }

    override func draw(_ rect: CGRect) {
        if let image = image {
            // Letterbox the image inside the drawable (aspect fit).
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }

            // rgb() is a custom UIColor extension (not shown here) that returns the
            // color's red/green/blue components as an (Int?, Int?, Int?) tuple.
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!)/256.0, Float(rgb.1!)/256.0, Float(rgb.2!)/256.0, 0.0)
            glClear(0x00004000)      // GL_COLOR_BUFFER_BIT

            // Set the blend mode to "source over" so that Core Image will use it.
            glEnable(0x0BE2)         // GL_BLEND
            glBlendFunc(1, 0x0303)   // GL_ONE, GL_ONE_MINUS_SRC_ALPHA

            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
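For orientation, here is a rough sketch of how a controller might drive this view - the controller, sliderChanged, and sourceImage names are placeholders, and the layout code is elided. The point is that the filter is created once, only its parameters change with the slider, and the filtered CIImage is handed to the view's image property:

import UIKit
import CoreImage

// Rough usage sketch: controller and property names (sliderChanged, sourceImage,
// perspectiveFilter) are placeholders, and layout of imageView is elided.
class PerspectiveViewController: UIViewController {

    let imageView = ImageView()    // the GLKView subclass above
    let perspectiveFilter = CIFilter(name: "CIPerspectiveTransform")!
    var sourceImage: CIImage!      // assumed to be loaded elsewhere

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView.clearColor = view.backgroundColor ?? .white
        view.addSubview(imageView)
        // ... add Auto Layout constraints for imageView here
    }

    // Wired to a UISlider: the filter is reused, only its parameters change.
    @objc func sliderChanged(_ sender: UISlider) {
        let offset = CGFloat(sender.value) * 100.0
        perspectiveFilter.setValue(sourceImage, forKey: kCIInputImageKey)
        perspectiveFilter.setValue(CIVector(x: 0, y: 0), forKey: "inputBottomLeft")
        perspectiveFilter.setValue(CIVector(x: offset, y: sourceImage.extent.height),
                                   forKey: "inputTopLeft")
        // ... set inputTopRight / inputBottomRight as needed
        imageView.image = perspectiveFilter.outputImage    // didSet calls setNeedsDisplay()
    }
}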
A few notes:

- I have a more elaborate GLKView subclass with scaleAspectFill code and other content modes.
- There is a single CIContext (renderContext). I use it when I need to create a UIImage (on iOS you "share" a UIImage).
- The didSet on the image property automatically calls setNeedsDisplay whenever the image changes. (I also call it explicitly when an iOS device changes orientation.) I don't know the macOS equivalent of that call (a minimal AppKit version of this property is sketched just below).
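If it helps, I'd guess the AppKit spelling of that last point is simply this (a sketch only - I haven't run it on macOS):

// Possible AppKit analogue of the image property above, inside an NSView subclass:
var image: CIImage! {
    didSet {
        needsDisplay = true    // AppKit's counterpart to UIKit's setNeedsDisplay()
    }
}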
I hope this gives you a good start at using OpenGL on macOS. If it is anything like UIKit, then trying to draw a CIImage straight into an NSView doesn't involve the GPU, which is a bad thing. If this leads you astray, let me know and I'll be happy to delete this answer. Good luck!