How do I set the AVCaptureVideoDataOutput capture frame size?

Asked: 2013-08-16 11:23:23

Tags: ios video avcapturesession

I have a layerRect that I use to display my camera image:

CGRect layerRect = [[videoStreamView layer] bounds];
[[[self captureManager] previewLayer] setBounds:layerRect];
[[[self captureManager] previewLayer] setPosition:CGPointMake(CGRectGetMidX(layerRect),
                                                              CGRectGetMidY(layerRect))];
[[videoStreamView layer] addSublayer:[[self captureManager] previewLayer]];

videoStreamView is the view in which I display the video; it is 150x150. But when I use AVCaptureVideoDataOutput with setSampleBufferDelegate, the video frames I receive are the full camera image (1280x720). How can I change that? Thanks.
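
The data output is wired up roughly like this (a simplified sketch; the queue name and the delegate wiring are my assumptions, not the exact code):

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// Deliver frames on a background serial queue; "videoQueue" is an arbitrary label.
dispatch_queue_t videoQueue = dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:videoQueue];
[[[self captureManager] session] addOutput:videoOutput];
// captureOutput:didOutputSampleBuffer:fromConnection: then receives the
// full-resolution 1280x720 frames, independent of the 150x150 preview layer.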

4 Answers:

Answer 0 (score: 1)

I believe it is controlled by the AVCaptureSession's sessionPreset property. I'm still trying to figure out what image size each preset value corresponds to.
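
For example, switching to a smaller preset changes the size of the buffers handed to the sample buffer delegate (a minimal sketch, assuming the session is reachable as [self.captureManager session]):

AVCaptureSession *session = [self.captureManager session];
if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    [session beginConfiguration];
    // Frames delivered to the data output delegate are now 640x480
    // instead of the 1280x720 produced by the 720p preset.
    session.sessionPreset = AVCaptureSessionPreset640x480;
    [session commitConfiguration];
}

Note that no preset produces a 150x150 buffer, so if you need exactly that size you would still crop or scale the frames yourself.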

Answer 1 (score: 0)

Try this code:

AVCaptureVideoPreviewLayer *newCaptureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:[self.captureManager session]];
// "bounds" is the rect of the view that should display the preview.
newCaptureVideoPreviewLayer.frame = bounds; // or CGRectMake(bounds.origin.x, bounds.origin.y, bounds.size.height, bounds.size.width);

Answer 2 (score: 0)

@property (nonatomic, retain) AVCaptureVideoPreviewLayer *prevLayer;

Then:

self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
self.prevLayer.frame = yourRect;
[self.view.layer addSublayer:self.prevLayer];

Answer 3 (score: 0)

Maybe this solves it?

CGRect bounds = view.layer.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
previewLayer.bounds = bounds;
previewLayer.position = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));