I've searched through various Apple docs and StackOverflow answers, but nothing has really helped; I still get a blank application window. I'm trying to display the contents of a pixel buffer in an NSWindow, and to that end I've allocated a buffer:
UInt8 *row = (UInt8 *)malloc(WINDOW_WIDTH * WINDOW_HEIGHT * bytes_per_pixel);
UInt32 pitch = WINDOW_WIDTH * bytes_per_pixel;

// For each row
for (UInt32 y = 0; y < WINDOW_HEIGHT; ++y) {
    Pixel *pixel = (Pixel *)row;
    // For each pixel in a row
    for (UInt32 x = 0; x < WINDOW_WIDTH; ++x) {
        *pixel++ = 0xFF000000;
    }
    row += pitch;
}
This should prepare a buffer of red pixels. Then I create an NSBitmapImageRep:
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:(u8 *) row
pixelsWide:WINDOW_WIDTH
pixelsHigh:WINDOW_HEIGHT
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bytesPerRow:WINDOW_WIDTH * 4
bitsPerPixel:32];
Then I convert it to an NSImage:
NSSize imageSize = NSMakeSize(CGImageGetWidth([imageRep CGImage]), CGImageGetHeight([imageRep CGImage]));
NSImage *image = [[NSImage alloc] initWithSize:imageSize];
[image addRepresentation:imageRep];
Then I configure the view:
NSView *view = [window contentView];
[view setWantsLayer: YES];
[[view layer] setContents: image];
Sadly, this doesn't give me the result I expect.
Answer 0 (score: 1):

Here are some of the problems with your code:
You increment row by pitch at the end of every pass through the y loop, and you never saved a pointer to the start of the buffer. So when you create the NSBitmapImageRep, you're passing a pointer that's one past the end of your buffer.
You're passing row as the first (planes) argument of initWithBitmapDataPlanes:..., but you need to pass &row. The documentation says:

    An array of character pointers, each of which points to a buffer containing raw image data. [...]

"An array of character pointers" means (in C terms) that you pass a pointer to a pointer.
You said "This should prepare a buffer of red pixels," but you fill the buffer with 0xFF000000 and then say hasAlpha:YES. Depending on the byte order the initializer uses, you've either set the alpha channel to 0, or you've set the alpha channel to 0xFF but all the color channels to 0.

In fact, you've set every pixel to opaque black (alpha = 0xFF, colors all zero). Try setting each pixel to 0xFF00007F and you'll get a dark red (alpha = 0xFF, red = 0x7F).
So:
typedef struct {
    uint8_t red;
    uint8_t green;
    uint8_t blue;
    uint8_t alpha;
} Pixel;

@implementation AppDelegate

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    size_t width = self.window.contentView.bounds.size.width;
    size_t height = self.window.contentView.bounds.size.height;

    Pixel color = { .red=127, .green=0, .blue=0, .alpha=255 };
    size_t pitch = width * sizeof(Pixel);
    uint8_t *buffer = malloc(pitch * height);
    // Fill every pixel with the dark red color, recomputing each row
    // from the saved base pointer instead of advancing it.
    for (size_t y = 0; y < height; ++y) {
        Pixel *row = (Pixel *)(buffer + y * pitch);
        for (size_t x = 0; x < width; ++x) {
            row[x] = color;
        }
    }

    // Note &buffer: the initializer takes an array of plane pointers.
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:&buffer
        pixelsWide:width pixelsHigh:height
        bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO
        colorSpaceName:NSDeviceRGBColorSpace
        bytesPerRow:pitch bitsPerPixel:sizeof(Pixel) * 8];

    NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
    [image addRepresentation:rep];

    self.window.contentView.wantsLayer = YES;
    self.window.contentView.layer.contents = image;
}

@end
Result:
Note that I don't free buffer. If you free buffer before rep is destroyed, things will go wrong. For example, if you simply add free(buffer) at the end of applicationDidFinishLaunching:, the window comes up gray.
This is a tricky problem to solve correctly. If you use Core Graphics instead, it handles the memory management properly for you. You can ask Core Graphics to allocate the buffer (by passing NULL instead of a valid pointer), and it frees the buffer when appropriate.

You do have to release the Core Graphics objects you create to avoid leaking them, but you can do that as soon as you're done with them. The Product > Analyze command can also help you find leaks of Core Graphics objects, whereas it can't help you find leaks of un-freed malloc blocks.
Here's what the Core Graphics solution looks like:
typedef struct {
    uint8_t red;
    uint8_t green;
    uint8_t blue;
    uint8_t alpha;
} Pixel;

@implementation AppDelegate

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    size_t width = self.window.contentView.bounds.size.width;
    size_t height = self.window.contentView.bounds.size.height;

    // Passing NULL lets Core Graphics allocate and own the pixel buffer.
    CGColorSpaceRef rgb = CGColorSpaceCreateWithName(kCGColorSpaceLinearSRGB);
    CGContextRef gc = CGBitmapContextCreate(NULL, width, height, 8, 0, rgb,
        kCGImageByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(rgb);

    size_t pitch = CGBitmapContextGetBytesPerRow(gc);
    uint8_t *buffer = CGBitmapContextGetData(gc);
    Pixel color = { .red=127, .green=0, .blue=0, .alpha=255 };
    for (size_t y = 0; y < height; ++y) {
        Pixel *row = (Pixel *)(buffer + y * pitch);
        for (size_t x = 0; x < width; ++x) {
            row[x] = color;
        }
    }

    CGImageRef image = CGBitmapContextCreateImage(gc);
    CGContextRelease(gc);
    self.window.contentView.wantsLayer = YES;
    self.window.contentView.layer.contents = (__bridge id)image;
    CGImageRelease(image);
}

@end
Answer 1 (score: 0):

I'm not sure exactly what's going on in your case, but here's code that has been working for years:
static NSImage *NewImageFromRGBA(const UInt8 *rawRGBA, NSInteger width, NSInteger height)
{
    size_t rawRGBASize = height * width * 4; /* sizeof(RGBA) = 4 */

    // Create a bitmap representation, allowing NSBitmapImageRep to allocate its own data buffer
    NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
        pixelsWide:width
        pixelsHigh:height
        bitsPerSample:8
        samplesPerPixel:4
        hasAlpha:YES
        isPlanar:NO
        colorSpaceName:NSCalibratedRGBColorSpace
        bytesPerRow:0
        bitsPerPixel:0];
    NSCAssert(imageRep != nil, @"failed to create NSBitmapImageRep");
    NSCAssert((size_t)[imageRep bytesPerPlane] == rawRGBASize,
              @"alignment or size of CGContext buffer and NSImageRep do not agree");

    // Copy the raw bitmap image into the new image representation
    memcpy([imageRep bitmapData], rawRGBA, rawRGBASize);

    // Create an empty NSImage, then add the bitmap representation to it
    NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
    [image addRepresentation:imageRep];
    return image;
}