How do I pull a single channel out of an image with vImage? This is what I have at the moment:
vImage_Buffer redBuffer;
redBuffer.data = (void*)redImageData.bytes;
redBuffer.width = size.width;
redBuffer.height = size.height;
redBuffer.rowBytes = [redImageData length]/size.height;
vImage_Buffer redBuffer2;
redBuffer2.width = size.width;
redBuffer2.height = size.height;
redBuffer2.rowBytes = size.width;
vImageConvert_RGB888toPlanar8(&redBuffer, &redBuffer2, nil, nil, kvImageNoFlags);
redBuffer is a working image and produces no errors, but vImageConvert gives me an EXC_BAD_ACCESS. I have tried lots of options; some also caused an EXC_BAD_ACCESS and others produced a corrupted output image. What am I doing wrong?
So, thanks to Rob Keniger, I have managed to extract the channels, but when I try to put them back together the resulting image is horizontally stretched and shows RGB stripes. Why don't the channels recombine correctly? Here is an example that extracts the channels and then reassembles them:
vImage_Buffer blueBuffer;
blueBuffer.data = (void*)blueImageData.bytes;
blueBuffer.width = size.width;
blueBuffer.height = size.height;
blueBuffer.rowBytes = [blueImageData length]/size.height;
vImage_Buffer rBuffer;
rBuffer.width = size.width;
rBuffer.height = size.height;
rBuffer.rowBytes = size.width;
void *rPixelBuffer = malloc(size.width * size.height);
if(rPixelBuffer == NULL)
{
    NSLog(@"No pixelbuffer");
}
rBuffer.data = rPixelBuffer;
vImage_Buffer gBuffer;
gBuffer.width = size.width;
gBuffer.height = size.height;
gBuffer.rowBytes = size.width;
void *gPixelBuffer = malloc(size.width * size.height);
if(gPixelBuffer == NULL)
{
    NSLog(@"No pixelbuffer");
}
gBuffer.data = gPixelBuffer;
vImage_Buffer bBuffer;
bBuffer.width = size.width;
bBuffer.height = size.height;
bBuffer.rowBytes = size.width;
void *bPixelBuffer = malloc(size.width * size.height);
if(bPixelBuffer == NULL)
{
    NSLog(@"No pixelbuffer");
}
bBuffer.data = bPixelBuffer;
vImageConvert_RGB888toPlanar8(&blueBuffer, &rBuffer, &gBuffer, &bBuffer, kvImageNoFlags);
size_t destinationImageBytesLength = size.width*size.height*3;
const void* destinationImageBytes = valloc(destinationImageBytesLength);
NSData* destinationImageData = [[NSData alloc] initWithBytes:destinationImageBytes length:destinationImageBytesLength];
vImage_Buffer destinationBuffer;
destinationBuffer.data = (void*)destinationImageData.bytes;
destinationBuffer.width = size.width;
destinationBuffer.height = size.height;
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
vImage_Error result = vImageConvert_Planar8toRGB888(&rBuffer, &gBuffer, &bBuffer, &destinationBuffer, 0);
NSImage* image = nil;
if(result == kvImageNoError)
{
    //TODO: If you need color matching, use an appropriate colorspace here
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)(destinationImageData));
    CGImageRef finalImageRef = CGImageCreate(size.width, size.height, 8, 24, destinationBuffer.rowBytes, colorSpace, kCGBitmapByteOrder32Big|kCGImageAlphaNone, dataProvider, NULL, NO, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(dataProvider);
    image = [[NSImage alloc] initWithCGImage:finalImageRef size:NSMakeSize(size.width, size.height)];
    CGImageRelease(finalImageRef);
}
free((void*)destinationImageBytes);
return image;
Answer 0 (score: 3)
Solution (or, at least, the start of one):
This worked for me, except in the OpenCV version of an iOS camera app I was developing, where the preview window showed a white stripe down one side; OpenCV never does things quite the way I do. I don't think that's your problem, though, and since the code involved is only three lines, it's worth a try.
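A minimal sketch of that kind of fix, assuming the stretching and stripes come from mismatched rowBytes: let vImageBuffer_Init choose (and align) the stride for every plane, and pass each buffer's own rowBytes to CGImageCreate rather than recomputing it. The function name and the tightly packed RGB888 source are assumptions for illustration, not code from this thread:

#import <Accelerate/Accelerate.h>
#import <Cocoa/Cocoa.h>

// Sketch: round-trip RGB888 -> three Planar8 planes -> RGB888, letting
// vImageBuffer_Init pick every rowBytes. Error handling omitted for brevity.
static NSImage *SplitAndRecombine(NSData *rgbImageData, NSSize size)
{
    vImage_Buffer src = {
        .data     = (void *)rgbImageData.bytes,
        .height   = (vImagePixelCount)size.height,
        .width    = (vImagePixelCount)size.width,
        .rowBytes = rgbImageData.length / (size_t)size.height
    };

    vImage_Buffer r, g, b, dest;
    vImageBuffer_Init(&r,    src.height, src.width, 8,  kvImageNoFlags);
    vImageBuffer_Init(&g,    src.height, src.width, 8,  kvImageNoFlags);
    vImageBuffer_Init(&b,    src.height, src.width, 8,  kvImageNoFlags);
    vImageBuffer_Init(&dest, src.height, src.width, 24, kvImageNoFlags);

    vImageConvert_RGB888toPlanar8(&src, &r, &g, &b, kvImageNoFlags);
    vImageConvert_Planar8toRGB888(&r, &g, &b, &dest, kvImageNoFlags);

    // Build the CGImage with the buffer's own rowBytes, not a recomputed one.
    NSData *destData = [NSData dataWithBytes:dest.data
                                      length:dest.rowBytes * dest.height];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)destData);
    CGImageRef finalImageRef = CGImageCreate(dest.width, dest.height, 8, 24,
                                             dest.rowBytes, colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaNone,
                                             provider, NULL, NO, kCGRenderingIntentDefault);
    NSImage *image = [[NSImage alloc] initWithCGImage:finalImageRef size:size];
    CGImageRelease(finalImageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    free(r.data); free(g.data); free(b.data); free(dest.data);
    return image;
}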
Answer 1 (score: 2)
You haven't initialized the redBuffer2 output buffer, so there is nowhere to store the function's output.
//allocate a buffer for the output image and check it exists
void *pixelBuffer = malloc(size.width * size.height);
if(pixelBuffer == NULL)
{
    NSLog(@"No pixelbuffer");
    //handle this somehow
}
//assign the allocated buffer to the vImage buffer struct
redBuffer2.data = pixelBuffer;
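With real storage behind the destination, the conversion then has somewhere to write. A sketch of the corrected call site (gBuffer2 and bBuffer2 are hypothetical planes set up the same way as redBuffer2, since none of the destination arguments may be NULL, as the next answer explains):

vImageConvert_RGB888toPlanar8(&redBuffer, &redBuffer2, &gBuffer2, &bBuffer2, kvImageNoFlags);
// ... use redBuffer2 ...
free(pixelBuffer); // release the malloc'd plane when finished with redBuffer2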
Answer 2 (score: 2)
vImageConvert_RGB888toPlanar8 does not accept NULL pointers for its vImage_Buffer arguments. The function is marked VIMAGE_NON_NULL(1,2,3,4), which should cause the compiler to complain about this usage.
vImageExtractChannel_ARGB8888 was added in OS X 10.10 and iOS 8, but there is no RGB888 equivalent. If you need one, you should file a feature request at http://bugreporter.apple.com. In the meantime, you can create three Planar8 output buffers and throw two of them away.
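For instance, to keep only the red plane, something along these lines would do (a sketch: srcBuffer stands for your packed RGB888 source buffer, and error checking is omitted):

vImage_Buffer red, green, blue;
vImageBuffer_Init(&red,   srcBuffer.height, srcBuffer.width, 8, kvImageNoFlags);
vImageBuffer_Init(&green, srcBuffer.height, srcBuffer.width, 8, kvImageNoFlags);
vImageBuffer_Init(&blue,  srcBuffer.height, srcBuffer.width, 8, kvImageNoFlags);
vImageConvert_RGB888toPlanar8(&srcBuffer, &red, &green, &blue, kvImageNoFlags);
// Throw away the two planes we don't need.
free(green.data);
free(blue.data);
// red now holds the Planar8 red channel; free(red.data) when done.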