I'm grabbing screenshots of my OpenGL scene with glReadPixels and then turning them into a video using AVAssetWriter on iOS 4. My problem is that I need to pass the alpha channel through to the video, which only accepts kCVPixelFormatType_32ARGB, while glReadPixels retrieves RGBA. So basically I need a way to convert my RGBA to ARGB, in other words to put the alpha byte first.
int depth = 4;
unsigned char buffer[width * height * depth];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, width * height * depth, NULL);
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
CGImageRef image = CGImageCreate(width, height, 8, 32, width*depth, CGColorSpaceCreateDeviceRGB(), bitmapInfo, ref, NULL, true, kCGRenderingIntentDefault);
UIWindow* parentWindow = [self window];
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey, [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess);
NSParameterAssert(pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, depth*width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, parentWindow.transform);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer; // chuck pixel buffer into AVAssetWriter
Thought I'd post the whole code, since it may help someone else.
Cheers
Answer 0 (Score: 6)
Note: I'm assuming 8 bits per channel here. If that's not the case, adjust accordingly.
To move the alpha bits from the end to the front, you need to perform a rotation. This is usually expressed most easily through bit shifting.
In this case, you want to shift the RGB bits right by 8 and the A bits left by 24, then combine the two with a bitwise OR, so it becomes argb = (rgba >> 8) | (rgba << 24).
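A minimal sketch of that rotation applied directly to the glReadPixels buffer (assumptions: 8 bits per channel, tightly packed pixels, and a hypothetical helper name; the byte-wise form below is equivalent to the word-level shifts when the pixel is viewed with R in the most significant byte, and it sidesteps endianness questions):

#include <stdint.h>
#include <stddef.h>

// Rotate each RGBA pixel into ARGB in place: alpha moves to the front,
// and R, G, B each move back by one byte.
static void rgbaToArgbInPlace(uint8_t *pixels, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; i++)
    {
        uint8_t *p = pixels + i * 4;   // p[0]=R, p[1]=G, p[2]=B, p[3]=A
        uint8_t alpha = p[3];
        p[3] = p[2];                   // B becomes the last byte
        p[2] = p[1];                   // G
        p[1] = p[0];                   // R
        p[0] = alpha;                  // A becomes the first byte
    }
}

// e.g. rgbaToArgbInPlace(buffer, width * height); before handing the data to Core Video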
Answer 1 (Score: 2)
Better still, don't encode your video using ARGB; send your AVAssetWriter BGRA frames instead. As I describe in this answer, doing so lets you encode 640x480 video at 30 FPS on an iPhone 4, and up to 20 FPS for 720p video. An iPhone 4S can go all the way up to 1080p video at 30 FPS using this.
You'll also want to make sure you use a pixel buffer pool rather than recreating a pixel buffer each time. Copying the code from that answer, you configure the AVAssetWriter like this:
NSError *error = nil;
assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error];
if (error != nil)
{
    NSLog(@"Error: %@", error);
}
NSMutableDictionary * outputSettings = [[NSMutableDictionary alloc] init];
[outputSettings setObject: AVVideoCodecH264 forKey: AVVideoCodecKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.width] forKey: AVVideoWidthKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.height] forKey: AVVideoHeightKey];
assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
assetWriterVideoInput.expectsMediaDataInRealTime = YES;
// You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA.
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                       [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
                                                       [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
                                                       nil];
assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
[assetWriter addInput:assetWriterVideoInput];
Then use this code to grab each rendered frame with glReadPixels():
CVPixelBufferRef pixel_buffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
if ((pixel_buffer == NULL) || (status != kCVReturnSuccess))
{
    return;
}
else
{
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);

    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
    glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);
}
// May need to add a check here, because if two consecutive times with the same value are added to the movie, it aborts recording
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime],120);
if(![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime])
{
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
}
else
{
    // NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
}
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
CVPixelBufferRelease(pixel_buffer);
You need to color-swizzle your frames when using glReadPixels(), so I use an offscreen FBO and a fragment shader with the following code to do that:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
    gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;
}
However, on iOS 5.0 there is an even faster way to grab OpenGL ES content than glReadPixels(): texture caches, which I describe in this answer. The nice thing about that process is that the textures already store their content in BGRA pixel format, so you can feed the enclosing pixel buffers straight to an AVAssetWriter without any color conversion and still see excellent encoding speeds.
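For reference, a rough sketch of that iOS 5.0 texture cache path (assumptions: myEAGLContext is the EAGLContext you render with, videoSize matches the asset writer settings above, a framebuffer object is already bound, and error handling is elided). The pixel buffer backing the texture is BGRA, so once the scene has been rendered into the FBO the same buffer can be appended to the AVAssetWriter exactly as in the code above:

#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

// Create a texture cache tied to the rendering context.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, myEAGLContext, NULL, &textureCache);

// Create a BGRA pixel buffer backed by an IOSurface so the GPU can render into it directly.
NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObject:[NSDictionary dictionary]
                                                              forKey:(NSString *)kCVPixelBufferIOSurfacePropertiesKey];
CVPixelBufferRef renderTarget = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, (size_t)videoSize.width, (size_t)videoSize.height,
                    kCVPixelFormatType_32BGRA, (CFDictionaryRef)bufferAttributes, &renderTarget);

// Wrap the pixel buffer in an OpenGL ES texture via the cache.
CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, renderTarget, NULL,
                                             GL_TEXTURE_2D, GL_RGBA,
                                             (GLsizei)videoSize.width, (GLsizei)videoSize.height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

// Attach the cache-backed texture as the color target of the offscreen FBO and draw the scene into it.
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);

// ... render the scene, then append renderTarget with appendPixelBuffer:withPresentationTime:
// as before, and release renderTexture and renderTarget when you are done with the frame.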
Answer 2 (Score: 2)
I realize this question has already been answered, but I wanted to make sure people know about vImage, part of the Accelerate framework, which is available on both iOS and OS X. My understanding is that Core Graphics uses vImage to perform CPU-bound vector operations on bitmaps.
The specific API you want for converting ARGB to RGBA is vImagePermuteChannels_ARGB8888. There are also APIs to convert RGB to ARGB/XRGB, flip images, overwrite channels, and more. It's a bit of a hidden gem!
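A minimal sketch of that call (assumptions: buffer, width and height are the ones from the question's glReadPixels snippet, rows are tightly packed, and permuting in place is acceptable; if in doubt, permute into a separate destination buffer). The permute map picks, for each destination channel, which source channel to copy, so {3, 0, 1, 2} turns RGBA into ARGB:

#import <Accelerate/Accelerate.h>

vImage_Buffer src = {
    .data     = buffer,
    .height   = height,
    .width    = width,
    .rowBytes = width * 4
};
vImage_Buffer dest = src; // permuting in place here; a second buffer also works

const uint8_t permuteMap[4] = { 3, 0, 1, 2 }; // dest A,R,G,B taken from source bytes 3 (A), 0 (R), 1 (G), 2 (B)
vImage_Error err = vImagePermuteChannels_ARGB8888(&src, &dest, permuteMap, kvImageNoFlags);
if (err != kvImageNoError)
{
    NSLog(@"vImagePermuteChannels_ARGB8888 failed: %ld", (long)err);
}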
Update: Brad Larson wrote a great answer to essentially the same question here.
Answer 3 (Score: 0)
Yep, it's 8 bits per channel, so it goes something like this:
int depth = 4;
int width = 320;
int height = 480;
unsigned char buffer[width * height * depth];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        // Note: buffer is an array of bytes, so buffer[i*j] is a single 8-bit channel
        // value (and i*j is not a valid pixel index); for the rotation to work, each
        // pixel would have to be treated as one 32-bit word.
        buffer[i*j] = (buffer[i*j] >> 8) | (buffer[i*j] << 24);
    }
}
I can't seem to get it to work, though.
Answer 4 (Score: 0)
I'm pretty sure the alpha values can be ignored, so you can simply do a memcpy with the pixel buffer array offset by one byte:
void *buffer = malloc(width*height*4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
…
memcpy(pxdata + 1, buffer, width*height*4 - 1);
Answer 5 (Score: -1)
+ (UIImage *) createARGBImageFromRGBAImage: (UIImage *)image {
    CGSize dimensions = [image size];
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * dimensions.width;
    NSUInteger bitsPerComponent = 8;

    unsigned char *rgba = malloc(bytesPerPixel * dimensions.width * dimensions.height);
    unsigned char *argb = malloc(bytesPerPixel * dimensions.width * dimensions.height);

    CGColorSpaceRef colorSpace = NULL;
    CGContextRef context = NULL;

    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(rgba, dimensions.width, dimensions.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault); // kCGBitmapByteOrder32Big
    CGContextDrawImage(context, CGRectMake(0, 0, dimensions.width, dimensions.height), [image CGImage]);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    for (int x = 0; x < dimensions.width; x++) {
        for (int y = 0; y < dimensions.height; y++) {
            NSUInteger offset = ((dimensions.width * y) + x) * bytesPerPixel;
            argb[offset + 0] = rgba[offset + 3];
            argb[offset + 1] = rgba[offset + 0];
            argb[offset + 2] = rgba[offset + 1];
            argb[offset + 3] = rgba[offset + 2];
        }
    }

    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(argb, dimensions.width, dimensions.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrderDefault); // kCGBitmapByteOrder32Big
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    image = [UIImage imageWithCGImage: imageRef];

    CGImageRelease(imageRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    free(rgba);
    free(argb);

    return image;
}