
How to split an image using Swift for OSX

I am splitting a 2880 x 2560 image into two 1440 x 2560 images. I have been trying to use CGImageForProposedRect to do this, but I'm not sure I'm approaching it correctly. Here is what I have so far (playground output below; the code is attached at the end):

[playground output screenshot]

But if you look closely, even though the CGRects are 1440 x 2560, neither leftImage nor rightImage is. Isn't that how CGImageForProposedRect works? If not, why does it take a CGRect parameter?

import Cocoa
import AppKit
import CoreGraphics

let image = NSImage(named:"image")
if let image = image {
    // Full-size rect for the source image (in points).
    var imageRect:CGRect = CGRectMake(0, 0, image.size.width, image.size.height)
    var imageRef = image.CGImageForProposedRect(&imageRect, context: nil, hints: nil)

    // Left half: same origin, half the width.
    var leftImageRect:CGRect = CGRectMake(0, 0, image.size.width/2.0, image.size.height)
    var leftImageRef = image.CGImageForProposedRect(&leftImageRect, context: nil, hints: nil)
    var leftImage = NSImage(CGImage:leftImageRef!.takeUnretainedValue(), size:NSZeroSize)

    // Right half: origin shifted to the horizontal midpoint.
    var rightImageRect:CGRect = CGRectMake(image.size.width/2.0, 0, image.size.width/2.0, image.size.height)
    var rightImageRef = image.CGImageForProposedRect(&rightImageRect, context: nil, hints: nil)
    var rightImage = NSImage(CGImage:rightImageRef!.takeUnretainedValue(), size:NSZeroSize)
}

Answer


It seems that using

var leftImageRef = CGImageCreateWithImageInRect(imageRef!.takeUnretainedValue(), leftImageRect) 
var leftImage = NSImage(CGImage:leftImageRef, size:NSZeroSize) 

in place of

var leftImageRef = image.CGImageForProposedRect(&leftImageRect, context: nil, hints: nil) 
var leftImage = NSImage(CGImage:leftImageRef!.takeUnretainedValue(), size:NSZeroSize) 

fixed my problem. I'm still not sure why, though, so if anyone has a better explanation I'll gladly accept it as the correct answer.

Thanks!
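
As for the why: my understanding is that CGImageForProposedRect does not crop at all. It returns a CGImage covering the whole image that is suitable for drawing into the proposed rect, and may adjust that rect to match the best available representation, which is why both halves came back at full size. CGImageCreateWithImageInRect is the call that actually extracts a sub-rectangle. For reference, here is a minimal sketch of the full split in the same Swift 1.x-era style as the code above; building the crop rects from CGImageGetWidth/CGImageGetHeight (pixels rather than points) is my own assumption, added to avoid a point/pixel mismatch on Retina-backed images:

import Cocoa

// Sketch: split an NSImage into left and right halves.
func splitImage(image: NSImage) -> (left: NSImage, right: NSImage)? {
    var proposedRect = CGRectMake(0, 0, image.size.width, image.size.height)
    let imageRef = image.CGImageForProposedRect(&proposedRect, context: nil, hints: nil)
    if imageRef == nil { return nil }
    let cgImage = imageRef!.takeUnretainedValue()

    // Assumption: use the backing CGImage's pixel dimensions, since
    // image.size is in points and may differ on Retina assets.
    let pixelWidth = CGFloat(CGImageGetWidth(cgImage))
    let pixelHeight = CGFloat(CGImageGetHeight(cgImage))
    let leftRect = CGRectMake(0, 0, pixelWidth/2.0, pixelHeight)
    let rightRect = CGRectMake(pixelWidth/2.0, 0, pixelWidth/2.0, pixelHeight)

    // CGImageCreateWithImageInRect does the actual cropping.
    let leftImage = NSImage(CGImage: CGImageCreateWithImageInRect(cgImage, leftRect), size: NSZeroSize)
    let rightImage = NSImage(CGImage: CGImageCreateWithImageInRect(cgImage, rightRect), size: NSZeroSize)
    return (left: leftImage, right: rightImage)
}

Calling splitImage(image) on the 2880 x 2560 source in the playground above should yield the two 1440 x 2560 halves.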