I have some code that sets up a capture session from the camera so I can process the frames with OpenCV, and then set the image property of a UIImageView with the UIImage generated from each frame. When the app starts, the image view's image is nil, and no frames show up until I push another view controller onto the stack and then pop it off. Then the image stays the same until I do that again. NSLog statements show that the callback is being called at roughly the correct frame rate. Any ideas why nothing is displayed? I reduced the frame rate all the way down to 2 frames a second. Is it not processing fast enough?

Here's the code:
- (void)setupCaptureSession {
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetLow;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        // Handling the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    output.alwaysDiscardsLateVideoFrames = YES;
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings =
        [NSDictionary dictionaryWithObject:
                          [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.
    output.minFrameDuration = CMTimeMake(1, 1);

    // Start the session running to start the flow of data
    [session startRunning];

    // Assign session to an ivar.
    [self setSession:session];
}
// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (!colorSpace) {
        NSLog(@"CGColorSpaceCreateDeviceRGB failure");
        return nil;
    }

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);

    // Create a Quartz direct-access data provider that uses data we supply
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, baseAddress,
                                                              bufferSize, NULL);

    // Create a bitmap image from data supplied by our data provider
    CGImageRef cgImage =
        CGImageCreate(width,
                      height,
                      8,
                      32,
                      bytesPerRow,
                      colorSpace,
                      kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                      provider,
                      NULL,
                      true,
                      kCGRenderingIntentDefault);

    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    // Create and return an image object representing the specified Quartz image
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    [self.delegate cameraCaptureGotFrame:image];
}
Answer 0 (Score: 6)

This could be threading related. Try:
[self.delegate performSelectorOnMainThread:@selector(cameraCaptureGotFrame:) withObject:image waitUntilDone:NO];
Answer 1 (Score: 3)

This looks like a threading issue. You cannot update your views from any thread other than the main thread. In your setup, which is fine, the delegate method captureOutput:didOutputSampleBuffer: is called on a secondary thread, so you cannot set the image view from there. Art Gillespie's answer is one way of solving it, if you can get rid of the bad access error.
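As a variant of that approach, here is a minimal sketch using GCD instead of performSelectorOnMainThread: (this assumes your deployment target supports blocks, i.e. iOS 4 or later, and reuses the delegate and cameraCaptureGotFrame: names from the code above):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    // The copied block retains image, so it stays alive until the main queue
    // runs it, and the UI update happens on the main thread where UIKit requires it.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.delegate cameraCaptureGotFrame:image];
    });
}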
Another way is to modify the sample buffer in captureOutput:didOutputSampleBuffer: and have it displayed by adding an AVCaptureVideoPreviewLayer instance to your capture session. That's certainly the preferred way if you only modify a small part of the image, such as highlighting something.
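For reference, attaching a preview layer would look roughly like this (a minimal sketch; previewView is a hypothetical container view, and session is the capture session created in setupCaptureSession):

// Let AVFoundation render the live camera feed directly, instead of
// converting every frame to a UIImage.
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = previewView.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[previewView.layer addSublayer:previewLayer];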
By the way: your bad access error probably arises because you don't retain the created image in the secondary thread, so it is freed before cameraCaptureGotFrame: is called on the main thread.

Update: To properly retain the image, increment the reference count in captureOutput:didOutputSampleBuffer: (in the secondary thread) and decrement it in cameraCaptureGotFrame: (in the main thread).
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    // increment ref count
    [image retain];

    [self.delegate performSelectorOnMainThread:@selector(cameraCaptureGotFrame:)
                                    withObject:image
                                 waitUntilDone:NO];
}

- (void)cameraCaptureGotFrame:(UIImage *)image
{
    // whatever this function does, e.g.:
    imageView.image = image;

    // decrement ref count
    [image release];
}
Without incrementing the reference count, the image is freed by the autorelease pool of the second thread before cameraCaptureGotFrame: is called on the main thread. Without decrementing it on the main thread, the images are never freed and you run out of memory within a few seconds.
Answer 2 (Score: 0)

Are you calling setNeedsDisplay on the UIImageView after each new image property update?

Edit:

Where and when are you updating the background image property in your image view?
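For example, something along these lines in the delegate (a minimal sketch, assuming an imageView ivar and that it is called on the main thread):

- (void)cameraCaptureGotFrame:(UIImage *)image
{
    imageView.image = image;
    [imageView setNeedsDisplay];  // ask UIKit to redraw with the new image
}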