My current setup is as follows (based on Brad Larson's ColorTrackingCamera project):

I'm using an AVCaptureSession set to AVCaptureSessionPreset640x480, and I let the output run through an OpenGL scene as a texture. A fragment shader then manipulates this texture.

I need this "lower quality" preset because I want to keep a high frame rate while the user is previewing. Then, when the user takes a still photo, I want to switch to higher-quality output.

First I thought I could change the sessionPreset on the AVCaptureSession, but this forces the camera to refocus, which breaks usability.
[captureSession beginConfiguration];
captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
[captureSession commitConfiguration];
Right now I'm trying to add a second AVCaptureStillImageOutput to the AVCaptureSession, but I get an empty pixel buffer, so I'm a bit stuck.
Here's my session setup code:
...
// Add the video frame output
[captureSession beginConfiguration];

videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                          forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

if ([captureSession canAddOutput:videoOutput])
{
    [captureSession addOutput:videoOutput];
}
else
{
    NSLog(@"Couldn't add video output");
}

[captureSession commitConfiguration];

// Add still output
[captureSession beginConfiguration];
stillOutput = [[AVCaptureStillImageOutput alloc] init];
if ([captureSession canAddOutput:stillOutput])
{
    [captureSession addOutput:stillOutput];
}
else
{
    NSLog(@"Couldn't add still output");
}
[captureSession commitConfiguration];

// Start capturing
[captureSession setSessionPreset:AVCaptureSessionPreset640x480];
if (![captureSession isRunning])
{
    [captureSession startRunning];
}
...
And this is my capture method:
- (void)prepareForHighResolutionOutput
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    [stillOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
        ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
            CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
            CVPixelBufferLockBaseAddress(pixelBuffer, 0);
            size_t width = CVPixelBufferGetWidth(pixelBuffer);
            size_t height = CVPixelBufferGetHeight(pixelBuffer);
            NSLog(@"%zu x %zu", width, height);
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        }];
}
(width and height both come out as 0.)
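To narrow down where the buffer goes empty, it can help to log what the still image output is actually delivering. A minimal diagnostic sketch, using the standard Core Media calls CMSampleBufferGetFormatDescription and CMFormatDescriptionGetMediaSubType inside the same completion handler:

// Inside the completionHandler, before touching the pixel buffer:
CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(imageSampleBuffer);
FourCharCode subType = CMFormatDescriptionGetMediaSubType(formatDescription);
// A JPEG-compressed sample buffer ('jpeg') carries no CVImageBuffer,
// so CMSampleBufferGetImageBuffer returns NULL for it.
NSLog(@"Still output delivers: %c%c%c%c",
      (char)(subType >> 24), (char)(subType >> 16),
      (char)(subType >> 8),  (char)subType);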
I've read through the AVFoundation documentation, but it seems I'm missing something essential.
Answer (score: 3):
I found a solution for my specific problem. I hope it can serve as a guide for anyone who runs into the same issue.

The reason the frame rate dropped so dramatically had to do with an internal conversion between pixel formats. After setting the pixel format explicitly, the frame rate increased. (It also accounts for the empty pixel buffer: without explicit outputSettings, AVCaptureStillImageOutput delivers JPEG-compressed sample buffers, and CMSampleBufferGetImageBuffer returns NULL for those.)
In my case, I create a BGRA texture with the following:
// Let Core Video create the OpenGL texture from the pixel buffer
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache,
                                                            pixelBuffer, NULL, GL_TEXTURE_2D, GL_RGBA,
                                                            width, height, GL_BGRA, GL_UNSIGNED_BYTE,
                                                            0, &videoTexture);
So when I set up the AVCaptureStillImageOutput instance, I changed the code to:
// Add still output
stillOutput = [[AVCaptureStillImageOutput alloc] init];
[stillOutput setOutputSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                            forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

if ([captureSession canAddOutput:stillOutput])
{
    [captureSession addOutput:stillOutput];
}
else
{
    NSLog(@"Couldn't add still output");
}
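With that one change, the capture method from the question should hand back a real BGRA pixel buffer whose dimensions match the still resolution. For completeness, a minimal sketch of turning that buffer into a UIImage (standard Core Graphics calls, assuming a BGRA buffer as configured above):

// Inside the captureStillImageAsynchronouslyFromConnection: completion handler
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);

// BGRA corresponds to little-endian 32-bit with premultiplied-first alpha
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *stillImage = [UIImage imageWithCGImage:cgImage];
// ... use stillImage here (save it, display it, etc.) ...

CGImageRelease(cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);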
I hope this helps someone someday ;)