I'm doing some image processing that needs floating-point grayscale image data. The -imagePlanarFData: method below is how I currently extract that data from an input CIImage:
- (NSData *)byteswapPlanarFData:(NSData *)data
                    swapInPlace:(BOOL)swapInPlace
{
    NSData * outputData = swapInPlace ? data : [data mutableCopy];
    const int32_t * image = [outputData bytes];
    size_t length = [outputData length] / sizeof(*image);
    for (int i = 0; i < length; i++) {
        int32_t * val = (int32_t *)&image[i];
        *val = OSSwapBigToHostInt32(*val);
    }
    return outputData;
}
- (NSData *)imagePlanarFData:(CIImage *)processedImage
{
    NSSize size = [processedImage extent].size;
    if (size.width == 0) {
        return nil;
    }

    dispatch_once(&_onceToken, ^{
        _colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericGray);
        _bytesPerRow = size.width * sizeof(float);
        _cgx = CGBitmapContextCreate(NULL, size.width, size.height, 32, _bytesPerRow,
                                     _colorSpace, kCGImageAlphaNone | kCGBitmapFloatComponents);

        // Work-around for CIImage drawing EXC_BAD_ACCESS when running with Guard Malloc;
        // see <http://stackoverflow.com/questions/11689233/ciimage-drawing-exc-bad-access>
        NSDictionary * options = nil;
        if (getenv("MallocStackLogging") || getenv("MallocStackLoggingNoCompact")) {
            NSLog(@"Forcing CIImageContext to use software rendering; see %@",
                  @"<http://stackoverflow.com/questions/11689233/ciimage-drawing-exc-bad-access>");
            options = @{ kCIContextUseSoftwareRenderer: @YES };
        }
        _cix = [CIContext contextWithCGContext:_cgx options:options];
        _rect = CGRectMake(0, 0, size.width, size.height);
    });

    float * data = CGBitmapContextGetData(_cgx);
    CGContextClearRect(_cgx, _rect);
    [_cix drawImage:processedImage inRect:_rect fromRect:_rect];

    NSData * pixelData = [NSData dataWithBytesNoCopy:data
                                              length:_bytesPerRow * size.height
                                        freeWhenDone:NO];
    // For whatever bizarre reason, CoreGraphics uses big-endian floats (!)
    return [self byteswapPlanarFData:pixelData swapInPlace:NO];
}
As the comment notes, I was surprised to find the floating-point pixel data coming out of the CGBitmapContext in big-endian order. (I only worked this out by trial and error.) Hence the extra -byteswapPlanarFData:swapInPlace: method, and all was right with the world... for a while.
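The "trial and error" boils down to a check along these lines. This is a standalone sketch added for illustration rather than code from the project, and the function name is made up; it assumes the same planar-float grayscale context configuration as above:

#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>
#import <libkern/OSByteOrder.h>

static void CheckFloatComponentByteOrder(void)
{
    // 1x1 planar-float gray context, configured like the one above.
    CGColorSpaceRef gray = CGColorSpaceCreateWithName(kCGColorSpaceGenericGray);
    CGContextRef ctx = CGBitmapContextCreate(NULL, 1, 1, 32, sizeof(float), gray,
                                             kCGImageAlphaNone | kCGBitmapFloatComponents);

    // Fill with a known gray level, then inspect the raw 32 bits that land in the buffer.
    CGContextSetGrayFillColor(ctx, 0.5, 1.0);
    CGContextFillRect(ctx, CGRectMake(0, 0, 1, 1));

    uint32_t raw;
    memcpy(&raw, CGBitmapContextGetData(ctx), sizeof(raw));
    uint32_t swappedBits = OSSwapInt32(raw);

    float asIs, swapped;
    memcpy(&asIs, &raw, sizeof(asIs));
    memcpy(&swapped, &swappedBits, sizeof(swapped));

    // Whichever of the two prints a sane value near the fill level reveals the byte order.
    NSLog(@"as-is: %g  byte-swapped: %g", asIs, swapped);

    CGContextRelease(ctx);
    CGColorSpaceRelease(gray);
}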
But now I want to feed this processed data back into a CGImage. Previously my code took the float * data buffer extracted above and used it directly to render a new CGImage, which was then wrapped in an NSImage, like so:
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, bytesPerRow * size.height, NULL);
CGImageRef renderedImage = CGImageCreate(size.width, size.height, 32, 32, bytesPerRow, colorSpace, kCGImageAlphaNone | kCGBitmapFloatComponents, provider, NULL, false, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
NSImage * image = [[NSImage alloc] initWithCGImage:renderedImage
                                              size:size];
CGImageRelease(renderedImage);
So now I do this instead:
pixelData = [mumble imagePlanarFData:processedImage];
// Swap data back to how it was before...
pixelData = [mumble byteswapPlanarFData:pixelData
                            swapInPlace:YES];
float * data = (float *)[pixelData bytes];
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, bytesPerRow * size.height, NULL);
CGImageRef renderedImage = CGImageCreate(size.width, size.height, 32, 32, bytesPerRow, colorSpace, kCGImageAlphaNone | kCGBitmapFloatComponents, provider, NULL, false, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
NSImage * image = [[NSImage alloc] initWithCGImage:renderedImage
                                              size:size];
CGImageRelease(renderedImage);
The above does not work. Instead I get a corrupted image and a stream of complaints from CoreGraphics:
Mar 21 05:56:46 aoide.local LabCam[34235] <Error>: CMMConvLut::ConvertFloat 1 input (inf)
Mar 21 05:56:46 aoide.local LabCam[34235] <Error>: ApplySequenceToBitmap failed (-171)
Mar 21 05:56:46 aoide.local LabCam[34235] <Error>: ColorSyncTransformConvert - failed width = 160 height = 199 dstDepth = 7 dstLayout = 0 dstBytesPerRow = 1920 srcDepth = 7 srcLayout = 0 srcBytesPerRow = 640
Where am I going wrong?
Answer 0 (score: 0)
Ah, found the problem... There was a leftover routine, from before I added the byte-swap call to -imagePlanarFData:, that was also swapping the bytes in place...
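In hindsight, a cheap guard before handing the buffer to CGImageCreate would have flagged this, since byte-swapped floats come out as inf/NaN or wildly out-of-range values, which is what the CMMConvLut complaint about (inf) hints at. A sketch of such a guard (a hypothetical helper, not code from the project):

#import <Foundation/Foundation.h>
#import <math.h>

// Hypothetical check: planar gray float data destined for CGImageCreate should be
// finite and, for this pipeline, roughly within 0..1. Byte-swapped floats almost
// never are, so an accidental extra swap fails here instead of producing a
// corrupted image downstream.
static BOOL PlanarFDataLooksSane(NSData * pixelData)
{
    const float * samples = [pixelData bytes];
    size_t count = [pixelData length] / sizeof(*samples);
    for (size_t i = 0; i < count; i++) {
        if (!isfinite(samples[i]) || samples[i] < -1.0f || samples[i] > 2.0f) {
            return NO;
        }
    }
    return YES;
}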
I would still love to hear an explanation of why CoreGraphics expects these values in big-endian byte order on our little-endian platform.
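One guess, untested here: CGBitmapContextCreate may simply default to big-endian component ordering when no byte-order flag is supplied, in which case asking for host order explicitly might make both swap calls unnecessary. A hypothetical variant of the context creation above:

// Hypothetical variant of the CGBitmapContextCreate call above, not verified with
// this pipeline: kCGBitmapByteOrder32Host requests host-endian 32-bit components.
// If CoreGraphics honors it for float formats, no byte swapping should be needed.
CGBitmapInfo info = kCGImageAlphaNone | kCGBitmapFloatComponents | kCGBitmapByteOrder32Host;
_cgx = CGBitmapContextCreate(NULL, size.width, size.height, 32, _bytesPerRow,
                             _colorSpace, info);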