Long-time Stack Overflow reader, first-time poster.
I'm trying to create an iPad app called CloudWriter. The concept of the app is drawing the shapes you see in the clouds. After downloading the app and launching CloudWriter, the user is presented with a live video background (from the rear camera) with an OpenGL drawing layer on top of it. A user can open the app, point the iPad at clouds in the sky, and draw what they see on the display.
A major feature of the app is for the user to record a video screen capture of what happens on the display during a session. The live video feed and the "drawing" view become a flattened (merged) video.
Some assumptions and background information about how this currently works:
At this point, the idea is that a user can point the iPad 3 camera at some clouds in the sky and draw the shapes they see. That functionality works flawlessly. I start to run into performance issues when I try to do a "flattened" video screen capture of the user's session. The resulting "flattened" video overlays the camera input with the user's drawing in real time.
A good example of an app with functionality similar to what we're looking for is Board Cam, available in the App Store.
To start the process, a "Record" button is visible in the view at all times. When the user taps the record button, the expectation is that the session will be recorded as a "flattened" video screen capture until the record button is tapped again.
When the user taps the "Record" button, the following happens in code: the AVCaptureSessionPreset is changed from AVCaptureSessionPresetMedium to AVCaptureSessionPresetPhoto, allowing access to
    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
didOutputSampleBuffer starts receiving data and creates an image from the current video buffer data. It does this by calling

    - (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
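The session reconfiguration itself isn't shown here; a minimal sketch of what the preset switch and the data-output delegate wiring typically look like (the session accessor and queue name are assumptions, not from my actual code):

    // Hypothetical sketch -- the actual session setup is not included in this question.
    AVCaptureSession *session = self.captureManager.session; // assumed accessor
    [session beginConfiguration];
    session.sessionPreset = AVCaptureSessionPresetPhoto; // was AVCaptureSessionPresetMedium
    [session commitConfiguration];

    AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
    [videoOut setSampleBufferDelegate:self
                                queue:dispatch_queue_create("com.example.videoQueue", DISPATCH_QUEUE_SERIAL)];
    if ([session canAddOutput:videoOut])
        [session addOutput:videoOut];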
The app's root view controller starts overriding drawRect: to create the flattened image, which is used as an individual frame in the final video.
To create a flattened image (used as an individual frame), in the root ViewController's drawRect: function we grab the last frame received by AVCamCaptureManager's didOutputSampleBuffer code. That is below:
    - (void)drawRect:(CGRect)rect {
        NSDate *start = [NSDate date];
        CGContextRef context = [self createBitmapContextOfSize:self.frame.size];

        // Not sure why this is necessary... the image renders upside-down and mirrored.
        CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, self.frame.size.height);
        CGContextConcatCTM(context, flipVertical);

        if (isRecording)
            [[self.layer presentationLayer] renderInContext:context];

        CGImageRef cgImage = CGBitmapContextCreateImage(context);
        UIImage *background = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);

        UIImage *bottomImage = background;

        if (((AVCamCaptureManager *)self.captureManager).currentImage != nil && isVideoBGActive) {
            UIImage *image = [((AVCamCaptureManager *)self.mainContentScreen.captureManager).currentImage retain];
            CGSize newSize = background.size;
            UIGraphicsBeginImageContext(newSize);

            // Use existing opacity as is
            if (isRecording) {
                if ([self.mainContentScreen isVideoBGActive] && _recording) {
                    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
                }
                // Apply supplied opacity
                [bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)
                              blendMode:kCGBlendModeNormal
                                  alpha:1.0];
            }

            UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            self.currentScreen = newImage;
            [image release];
        }

        if (isRecording) {
            float millisElapsed = [[NSDate date] timeIntervalSinceDate:startedAt] * 1000.0;
            [self writeVideoFrameAtTime:CMTimeMake((int)millisElapsed, 1000)];
        }

        float processingSeconds = [[NSDate date] timeIntervalSinceDate:start];
        float delayRemaining = (1.0 / self.frameRate) - processingSeconds;

        CGContextRelease(context);

        // Redraw at the specified frame rate.
        [self performSelector:@selector(setNeedsDisplay)
                   withObject:nil
                   afterDelay:delayRemaining > 0.0 ? delayRemaining : 0.01];
    }
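As an aside, a common alternative to that performSelector:afterDelay: redraw chain is to let a CADisplayLink drive redraws; a sketch under that assumption (the method names here are hypothetical, not from my code):

    // Sketch: drive redraws from the display's refresh rate instead of
    // re-scheduling setNeedsDisplay manually after each frame.
    - (void)startDisplayLink {
        CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self
                                                                 selector:@selector(displayLinkFired:)];
        displayLink.frameInterval = 2; // fire every other vsync: ~30 FPS on a 60 Hz display
        [displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    }

    - (void)displayLinkFired:(CADisplayLink *)link {
        [self setNeedsDisplay];
    }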
createBitmapContextOfSize: is below:
    - (CGContextRef)createBitmapContextOfSize:(CGSize)size {
        CGContextRef context = NULL;
        CGColorSpaceRef colorSpace = nil;
        int bitmapByteCount;
        int bitmapBytesPerRow;

        bitmapBytesPerRow = (size.width * 4);
        bitmapByteCount = (bitmapBytesPerRow * size.height);

        colorSpace = CGColorSpaceCreateDeviceRGB();

        // Reuse a single backing buffer across frames; free the previous one first.
        if (bitmapData != NULL) {
            free(bitmapData);
        }
        bitmapData = malloc(bitmapByteCount);
        if (bitmapData == NULL) {
            fprintf(stderr, "Memory not allocated!");
            CGColorSpaceRelease(colorSpace);
            return NULL;
        }

        context = CGBitmapContextCreate(bitmapData,
                                        size.width,
                                        size.height,
                                        8, // bits per component
                                        bitmapBytesPerRow,
                                        colorSpace,
                                        kCGImageAlphaPremultipliedFirst);
        if (context == NULL) {
            free(bitmapData);
            fprintf(stderr, "Context not created!");
            CGColorSpaceRelease(colorSpace);
            return NULL;
        }
        // Only touch the context after the NULL check above.
        CGContextSetAllowsAntialiasing(context, NO);

        CGColorSpaceRelease(colorSpace);
        return context;
    }
captureOutput:didOutputSampleBuffer:fromConnection: is below:
    // Delegate routine that is called when a sample buffer was written.
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        // Create a UIImage from the sample buffer data.
        [self imageFromSampleBuffer:sampleBuffer];
    }
imageFromSampleBuffer: is below:
    // Create a UIImage from sample buffer data -- modified not to return a
    // UIImage *, but rather to store it in self.currentImage.
    - (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        // Get a CMSampleBuffer's Core Video image buffer for the media data.
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        // Lock the base address of the pixel buffer.
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        int bytes = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        // Copy the pixel data out of the buffer (bytes per row * height).
        uint8_t *baseAddress = malloc(bytes * height);
        memcpy(baseAddress, CVPixelBufferGetBaseAddress(imageBuffer), bytes * height);

        size_t width = CVPixelBufferGetWidth(imageBuffer);

        // Create a device-dependent RGB color space.
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

        // Create a bitmap graphics context with the copied sample buffer data.
        CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                     bytes, colorSpace,
                                                     kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);

        // Create a Quartz image from the pixel data in the bitmap graphics context.
        CGImageRef quartzImage = CGBitmapContextCreateImage(context);

        // Unlock the pixel buffer.
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

        // Free up the context, color space, and pixel copy.
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        free(baseAddress);

        self.currentImage = [UIImage imageWithCGImage:quartzImage scale:0.25 orientation:UIImageOrientationUp];

        // Release the Quartz image.
        CGImageRelease(quartzImage);

        return nil;
    }
Finally, I write the frame to disk with writeVideoFrameAtTime:CMTimeMake, using the code below:
    - (void)writeVideoFrameAtTime:(CMTime)time {
        if (![videoWriterInput isReadyForMoreMediaData]) {
            NSLog(@"Not ready for video data");
        }
        else {
            @synchronized (self) {
                UIImage *newFrame = [self.currentScreen retain];
                CVPixelBufferRef pixelBuffer = NULL;
                CGImageRef cgImage = CGImageCreateCopy([newFrame CGImage]);
                CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
                if (image == nil) {
                    [newFrame release];
                    CVPixelBufferRelease(pixelBuffer);
                    CGImageRelease(cgImage);
                    return;
                }

                int status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                                avAdaptor.pixelBufferPool,
                                                                &pixelBuffer);
                if (status != 0) {
                    // Could not get a buffer from the pool.
                    NSLog(@"Error creating pixel buffer: status=%d", status);
                }

                // Set the image data into the pixel buffer.
                CVPixelBufferLockBaseAddress(pixelBuffer, 0);
                uint8_t *destPixels = CVPixelBufferGetBaseAddress(pixelBuffer);
                // XXX: will work if the pixel buffer is contiguous and has the
                // same bytesPerRow as the input data.
                CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels);

                if (status == 0) {
                    BOOL success = [avAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
                    if (!success)
                        NSLog(@"Warning: Unable to write buffer to video");
                }

                // Clean up.
                [newFrame release];
                CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
                CVPixelBufferRelease(pixelBuffer);
                CFRelease(image);
                CGImageRelease(cgImage);
            }
        }
    }
Once isRecording is set to YES, the iPad 3's performance drops from about 20 FPS to 5 FPS. Using Instruments, I can see that the following block of code (from drawRect:) is what drags performance down to an unusable level.
    if (_recording) {
        if ([self.mainContentScreen isVideoBGActive] && _recording) {
            [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
        }
        // Apply supplied opacity
        [bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)
                      blendMode:kCGBlendModeNormal
                          alpha:1.0];
    }
My understanding is that, because I'm capturing the full screen, we lose all the benefits that drawInRect: is supposed to give. Specifically, I'm talking about faster redraws because, in theory, we'd only be updating a small portion of the display (the CGRect passed in). Again, since we capture the full screen, I'm not sure drawInRect: can provide nearly as much benefit.
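(For reference, the rect handed to drawRect: only shrinks when invalidation is partial. A minimal sketch of partial invalidation, where lastStrokeRect and currentStrokeRect are hypothetical stroke bookkeeping, not variables from my code:)

    // Hypothetical sketch: invalidate only the region the latest stroke touched,
    // instead of redrawing the whole view every frame.
    CGRect dirtyRect = CGRectUnion(lastStrokeRect, currentStrokeRect);
    [self setNeedsDisplayInRect:CGRectIntegral(dirtyRect)];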
To improve performance, I figure that if I scale down the image that imageFromSampleBuffer provides, as well as the drawing view's current context, I'd see an increase in frame rate. Unfortunately, CoreGraphics.framework isn't something I've worked with before, so I don't know whether I can effectively tune performance to an acceptable level.
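For what it's worth, a Core Graphics downscale along those lines might look like the sketch below; the 0.5 factor is just an assumption to illustrate:

    // Minimal sketch: downscale a UIImage with Core Graphics before compositing.
    static UIImage *ScaledImage(UIImage *image, CGFloat scale) {
        CGSize newSize = CGSizeMake(image.size.width * scale, image.size.height * scale);
        UIGraphicsBeginImageContextWithOptions(newSize, YES, 1.0); // opaque, 1x scale
        [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result; // autoreleased under MRC
    }

    // Usage (hypothetical): UIImage *small = ScaledImage(captureManager.currentImage, 0.5);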
Do any Core Graphics gurus have input?
Also, ARC is turned off for some of the code, and the analyzer shows one leak, but I believe it's a false positive.
Coming soon: CloudWriter™, where the sky's the limit!
Answer (score: 1):
If you want any kind of decent recording performance, you're going to need to avoid redrawing things using Core Graphics. Stick to pure OpenGL ES.
You say that you've already done the finger painting in OpenGL ES, so you should be able to render that into a texture. The live video feed can also be directed to a texture. From there, you can do an overlay blend of the two based on the alpha channel in your finger-painting texture.
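A minimal sketch of what such an alpha-based overlay can look like as an OpenGL ES 2.0 fragment shader (the uniform and varying names are assumptions for illustration, and this assumes non-premultiplied alpha in the painting texture):

    // Hypothetical fragment shader: composite the painting over the video
    // using the painting's alpha channel.
    static const char *kBlendFragmentShader =
        "varying highp vec2 textureCoordinate;                                \n"
        "uniform sampler2D videoTexture;    // live camera feed               \n"
        "uniform sampler2D paintingTexture; // finger-painting render target  \n"
        "void main() {                                                        \n"
        "    lowp vec4 video = texture2D(videoTexture, textureCoordinate);    \n"
        "    lowp vec4 paint = texture2D(paintingTexture, textureCoordinate); \n"
        "    gl_FragColor = vec4(mix(video.rgb, paint.rgb, paint.a), 1.0);    \n"
        "}                                                                    \n";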
Doing this with OpenGL ES 2.0 shaders is really easy. In fact, my GPUImage open source framework can handle the video capture and blending portions of this (see the FilterShowcase sample application for an example of an image overlaid on video), if you supply a rendered texture from your drawing code. You'll have to make sure the painting uses OpenGL ES 2.0, not 1.1, and that it has the same share group as the GPUImage OpenGL ES context, but I show how to do that in the CubeExample application.
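Roughly, wiring that up with GPUImage might look like the sketch below; paintTextureID and movieURL are placeholders, and the 720p size is just an example:

    // Sketch: camera -> alpha blend <- painting texture, recorded to disk.
    GPUImageVideoCamera *camera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                            cameraPosition:AVCaptureDevicePositionBack];

    // Wraps the OpenGL ES texture your drawing code renders into.
    GPUImageTextureInput *painting =
        [[GPUImageTextureInput alloc] initWithTexture:paintTextureID
                                                 size:CGSizeMake(1280.0, 720.0)];

    GPUImageAlphaBlendFilter *blend = [[GPUImageAlphaBlendFilter alloc] init];
    [camera addTarget:blend];
    [painting addTarget:blend];

    GPUImageMovieWriter *writer =
        [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL
                                                 size:CGSizeMake(1280.0, 720.0)];
    [blend addTarget:writer];

    [camera startCameraCapture];
    [writer startRecording];
    // After each repaint of the drawing texture, push it into the pipeline:
    // [painting processTextureWithFrameTime:frameTime];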
I also handle the video recording for you in GPUImage in a high-performance manner, by using the texture caches on iOS 5.0+.
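Under the hood, the texture-cache path amounts to rendering straight into a CVPixelBuffer-backed texture, so no glReadPixels or memcpy is needed; a heavily abridged Core Video sketch, where eaglContext and pixelBuffer are assumed to already exist (and this code is non-ARC, matching the question):

    // Abridged sketch of the iOS 5.0+ texture-cache path.
    CVOpenGLESTextureCacheRef textureCache = NULL;
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                                 eaglContext, // assumed EAGLContext *
                                 NULL, &textureCache);

    CVOpenGLESTextureRef renderTexture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                                 pixelBuffer, // e.g. from the writer's pixel buffer pool
                                                 NULL, GL_TEXTURE_2D, GL_RGBA,
                                                 1280, 720, GL_BGRA, GL_UNSIGNED_BYTE,
                                                 0, &renderTexture);

    // Attach the texture as the framebuffer's color target and draw; the pixels
    // land directly in pixelBuffer, ready to hand to AVAssetWriter.
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                           CVOpenGLESTextureGetName(renderTexture), 0);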
You should be able to record this blend at a solid 30 FPS for 720p video (iPad 2) or 1080p video (iPad 3) by using something like my framework and staying within OpenGL ES.