我有一个CoreML模型,给我一个CVPixelBufferRef作为灰度的oneComponent8 512x512输出。我正在尝试将此输出转换为其他大小(256x248)的NSArray。现在,我正在尝试以一种非常复杂的方式进行此操作:将CVPixelBufferRef转换为GCImageRef,将GCImageRef转换为NSImage然后调整大小,然后使用NSData制作NSArray。奇怪的是,我已经将这些复杂的代码修补在一起,不断导致程序崩溃。崩溃始终为Exception Type: EXC_BAD_ACCESS (SIGSEGV)
,通常为:
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libsystem_platform.dylib 0x00007fff74109d09 _platform_memmove$VARIANT$Haswell + 41
或
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libobjc.A.dylib
0x00007fff7274764c objc_release + 28
我不确定到底出了什么问题,但是我猜我正在以某种方式弄乱alloc / init / release?还是lockFocus / unlockFocus滥用?另外,也许有一种从CVPixelBuffer到NSArray并调整大小的简便方法?
//Convert CVPixelBufferRef to CGImageRef
CVPixelBufferLockBaseAddress(prediction.outputImage, kCVPixelBufferLock_ReadOnly);
void *baseAddr = CVPixelBufferGetBaseAddress(prediction.outputImage);
size_t widthCV = CVPixelBufferGetWidth(prediction.outputImage);
size_t heightCV = CVPixelBufferGetHeight(prediction.outputImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef cgContext = CGBitmapContextCreate(baseAddr, widthCV, heightCV, 8, 2*CVPixelBufferGetBytesPerRow(prediction.outputImage), colorSpace, kCGImageAlphaNoneSkipLast);
CGImageRef cgImage = CGBitmapContextCreateImage(cgContext);
CGContextRelease(cgContext);
//CGImageRelease(cgImage);
CVPixelBufferUnlockBaseAddress(prediction.outputImage, kCVPixelBufferLock_ReadOnly);
CVPixelBufferRelease(prediction.outputImage);
//Convert CGImageRef to NSImage
NSRect imageRect = NSMakeRect(0.0, 0.0, 0.0, 0.0);
CGContextRef imageContext = nil;
NSImage* newImage = nil;
imageRect.size.height = CGImageGetHeight(cgImage);
imageRect.size.width = CGImageGetWidth(cgImage);
newImage = [[[NSImage alloc] initWithSize:imageRect.size] autorelease];
[newImage lockFocus];
imageContext = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextDrawImage(imageContext, *(CGRect*)&imageRect, cgImage);
CGImageRelease(cgImage);
[newImage unlockFocus];
//Convert to NSImage and Resize
NSSize oldSize;
oldSize.height = pixHeight;
oldSize.width = pixWidth;
[newImage setScalesWhenResized:YES];
NSImage *ROImask = [[NSImage alloc] initWithSize: oldSize];
[ROImask lockFocus];
[newImage setSize: oldSize];
[[NSGraphicsContext currentContext] setImageInterpolation: NSImageInterpolationHigh];
[newImage drawAtPoint:NSZeroPoint fromRect:CGRectMake(0, 0, oldSize.width, oldSize.height) operation:NSCompositeCopy fraction:1.0];
[ROImask unlockFocus];
//Convert NSImage to array
NSData *imageData = [ROImask TIFFRepresentation];
NSArray *ROIarray = [NSKeyedUnarchiver unarchiveObjectWithData:imageData];
希望您对可能发生的事情或尝试的事情提供一些建议。谢谢