I'm trying to split a 2880 x 2560 image into two 1440 x 2560 images. I've been trying to use CGImageForProposedRect to do this, but I'm not sure I'm approaching it correctly. Here's what I have so far (the playground output is shown, with the code attached at the end):
But notice that even though the CGRects are 1440 x 2560, leftImage and rightImage are not. Is this not how CGImageForProposedRect works? If not, why does it take a CGRect parameter?
import Cocoa
import AppKit
import CoreGraphics

let image = NSImage(named:"image")

if let image = image {
    // Full-image rect and CGImage for reference
    var imageRect:CGRect = CGRectMake(0, 0, image.size.width, image.size.height)
    var imageRef = image.CGImageForProposedRect(&imageRect, context: nil, hints: nil)

    // Left half: rect covering x = 0 ..< width/2
    var leftImageRect:CGRect = CGRectMake(0, 0, image.size.width/2.0, image.size.height)
    var leftImageRef = image.CGImageForProposedRect(&leftImageRect, context: nil, hints: nil)
    var leftImage = NSImage(CGImage:leftImageRef!.takeUnretainedValue(), size:NSZeroSize)

    // Right half: rect covering x = width/2 ..< width
    var rightImageRect:CGRect = CGRectMake(image.size.width/2.0, 0, image.size.width/2.0, image.size.height)
    var rightImageRef = image.CGImageForProposedRect(&rightImageRect, context: nil, hints: nil)
    var rightImage = NSImage(CGImage:rightImageRef!.takeUnretainedValue(), size:NSZeroSize)
}
Answer 0 (score: 1)
It seems that replacing

var leftImageRef = image.CGImageForProposedRect(&leftImageRect, context: nil, hints: nil)
var leftImage = NSImage(CGImage:leftImageRef!.takeUnretainedValue(), size:NSZeroSize)

with

var leftImageRef = CGImageCreateWithImageInRect(imageRef!.takeUnretainedValue(), leftImageRect)
var leftImage = NSImage(CGImage:leftImageRef, size:NSZeroSize)

solved my problem. However, I'm still not sure why, so if anyone has a better explanation, I'll select it as the "correct answer".

Thanks!
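
For reference, here is a minimal sketch of the same approach using the current Swift API names, where cgImage(forProposedRect:context:hints:) replaces CGImageForProposedRect and CGImage.cropping(to:) replaces CGImageCreateWithImageInRect. The asset name "image" and the 2880 x 2560 dimensions are carried over from the question as assumptions; the sketch builds the crop rects from the CGImage's pixel width/height rather than NSImage.size, since points and pixels can differ.

import AppKit

// Assumed asset name from the question; replace with your own image.
if let image = NSImage(named: "image") {
    var proposedRect = CGRect(origin: .zero, size: image.size)

    // cgImage(forProposedRect:...) returns a CGImage of the whole image suited to
    // drawing into the proposed rect; it does not crop, so we crop the result ourselves.
    if let cgImage = image.cgImage(forProposedRect: &proposedRect, context: nil, hints: nil) {
        // Work in the CGImage's pixel space (e.g. 2880 x 2560),
        // which may differ from NSImage.size (points).
        let pixelWidth = CGFloat(cgImage.width)
        let pixelHeight = CGFloat(cgImage.height)

        let leftRect = CGRect(x: 0, y: 0, width: pixelWidth / 2, height: pixelHeight)
        let rightRect = CGRect(x: pixelWidth / 2, y: 0, width: pixelWidth / 2, height: pixelHeight)

        // cropping(to:) is the modern name for CGImageCreateWithImageInRect.
        if let leftCG = cgImage.cropping(to: leftRect),
           let rightCG = cgImage.cropping(to: rightRect) {
            let leftImage = NSImage(cgImage: leftCG, size: NSZeroSize)
            let rightImage = NSImage(cgImage: rightCG, size: NSZeroSize)
            print(leftImage.size, rightImage.size)   // each should be half the original width
        }
    }
}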