Hey there, I'm trying to access the raw data from the iPhone camera using AVCaptureSession. I'm following the guide provided by Apple (link here).
The raw data from the sample buffer is in YUV format (am I correct here about the raw video frame format?). How do I get the data for the Y component directly from the raw data stored in the sample buffer?
Answer 0 (score: 20)
When setting up the AVCaptureVideoDataOutput that returns the raw camera frames, you can set the format of the frames using code like the following:
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
In this case a BGRA pixel format is specified (I used this to match a color format for an OpenGL ES texture). Each pixel in that format has one byte each for blue, green, red, and alpha, in that order. Going with this makes it easy to pull out the color components, but you do sacrifice a little performance by needing to convert from the camera-native YUV colorspace.
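For illustration, here is a minimal sketch of my own (not from the original answer; the (x, y) coordinate and variable names are placeholders) of pulling the components of one pixel out of a 32BGRA frame inside the delegate callback:
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char *base = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t x = 10, y = 20;                                  // hypothetical example coordinate
unsigned char *pixel = base + y * bytesPerRow + x * 4;  // 4 bytes per BGRA pixel
unsigned char blue  = pixel[0];
unsigned char green = pixel[1];
unsigned char red   = pixel[2];
unsigned char alpha = pixel[3];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);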
Other supported colorspaces are kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange and kCVPixelFormatType_420YpCbCr8BiPlanarFullRange on newer devices, and kCVPixelFormatType_422YpCbCr8 on the iPhone 3G. The VideoRange or FullRange suffix simply indicates whether the bytes are returned between 16 - 235 for Y and 16 - 240 for UV, or the full 0 - 255 for each component.
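If you want to stay in the camera-native colorspace, you can request one of these YUV formats explicitly when configuring the output. This is just a sketch mirroring the BGRA snippet above (it assumes the same videoOutput variable, and the full-range variant is used purely as an example):
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];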
I believe the default colorspace used by an AVCaptureVideoDataOutput instance is the YUV 4:2:0 planar colorspace (except on the iPhone 3G, where it is YUV 4:2:2 interleaved). This means that there are two planes of image data contained within the video frame, with the Y plane coming first. For every pixel in your resulting image, there is one byte for the Y value at that pixel.
You could get at this raw Y data by implementing something like this in your delegate callback:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

    // Do something with the raw pixels here

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
You could then figure out the location in the frame data for each X, Y coordinate on the image and pull out the byte corresponding to the Y component at that coordinate.
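As a rough illustration (my own sketch, not from the original answer, assuming the default biplanar format and the same pixelBuffer variable as above), the Y byte at a given coordinate can be read from the first plane like this:
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char *yPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0); // plane 0 holds the Y data
size_t yBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
size_t x = 100, y = 50;                                  // hypothetical example coordinate
unsigned char luminance = yPlane[y * yBytesPerRow + x];  // Y value for the pixel at (x, y)
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);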
Apple's FindMyiCone sample from WWDC 2010 (accessible along with the videos) shows how to process raw BGRA data from each frame. I also created a sample application, whose code you can download here, that performs color-based object tracking using the live video from the iPhone camera. Both show how to process raw pixel data, but neither works in the YUV colorspace.
Answer 1 (score: 18)
In addition to Brad's answer and your own code, you want to consider the following:
Since your image has two separate planes, the function CVPixelBufferGetBaseAddress will not return the base address of a plane but rather the base address of an additional data structure. It is probably due to the current implementation that you get an address close enough to the first plane that you can see the image, but it is also the reason it appears shifted and has garbage in the top left. The correct way to get the first plane is:
unsigned char *rowBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
A row in the image may be longer than the width of the image (due to rounding). That is why there are separate functions for getting the width and the number of bytes per row. You don't have this problem at the moment, but that might change with the next version of iOS, so your code should be:
int bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
int bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
int bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
int size = bufferHeight * bytesPerRow;
unsigned char *pixel = (unsigned char*)malloc(size);
unsigned char *rowBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy (pixel, rowBase, size);
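If you ever need a tightly packed luminance buffer (exactly width bytes per row, with any row padding stripped), a row-by-row copy is a safe variant of the code above. This is only a sketch under the same assumptions (plane 0 of the same pixelBuffer, locked beforehand; the variable names are mine):
size_t packedWidth  = CVPixelBufferGetWidth(pixelBuffer);
size_t packedHeight = CVPixelBufferGetHeight(pixelBuffer);
size_t planeBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
unsigned char *planeBase = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
unsigned char *packed = (unsigned char *)malloc(packedWidth * packedHeight);
for (size_t row = 0; row < packedHeight; row++) {
    // copy only the visible pixels of each row, skipping the padding at the end
    memcpy(packed + row * packedWidth, planeBase + row * planeBytesPerRow, packedWidth);
}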
Please note that your code will fail on an iPhone 3G.
Answer 2 (score: 7)
If you only need the luminance channel, I recommend against using the BGRA format, since it comes with a conversion overhead. Apple suggests using BGRA if you are doing rendering work, but you don't need it for extracting the luminance information. As Brad already mentioned, the most efficient format is the camera-native YUV format.
However, extracting the right bytes from the sample buffer is a bit tricky, especially on the iPhone 3G with its interleaved YUV 422 format. So here is my code, which works fine with the iPhone 3G, 3GS, iPod Touch 4 and iPhone 4S.
#pragma mark -
#pragma mark AVCaptureVideoDataOutputSampleBufferDelegate Methods
#if !(TARGET_IPHONE_SIMULATOR)
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // get image buffer reference
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // extract needed information from the image buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    CGSize resolution = CGSizeMake(CVPixelBufferGetWidth(imageBuffer), CVPixelBufferGetHeight(imageBuffer));

    // variables for grayscaleBuffer
    void *grayscaleBuffer = 0;
    size_t grayscaleBufferSize = 0;

    // the pixelFormat differs between iPhone 3G and later models
    OSType pixelFormat = CVPixelBufferGetPixelFormatType(imageBuffer);

    if (pixelFormat == '2vuy') { // iPhone 3G
        // kCVPixelFormatType_422YpCbCr8 = '2vuy',
        /* Component Y'CbCr 8-bit 4:2:2, ordered Cb Y'0 Cr Y'1 */

        // copy every second byte (the luminance bytes form the Y-channel) to a new buffer
        grayscaleBufferSize = bufferSize / 2;
        grayscaleBuffer = malloc(grayscaleBufferSize);
        if (grayscaleBuffer == NULL) {
            NSLog(@"ERROR in %@:%@:%d: couldn't allocate memory for grayscaleBuffer!", NSStringFromClass([self class]), NSStringFromSelector(_cmd), __LINE__);
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            return;
        }
        memset(grayscaleBuffer, 0, grayscaleBufferSize);
        unsigned char *sourceMemPos = (unsigned char *)baseAddress + 1;
        unsigned char *destinationMemPos = grayscaleBuffer;
        unsigned char *destinationEnd = (unsigned char *)grayscaleBuffer + grayscaleBufferSize;
        while (destinationMemPos < destinationEnd) {
            memcpy(destinationMemPos, sourceMemPos, 1);
            destinationMemPos += 1;
            sourceMemPos += 2;
        }
    }

    if (pixelFormat == '420v' || pixelFormat == '420f') {
        // kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v',
        // kCVPixelFormatType_420YpCbCr8BiPlanarFullRange  = '420f',
        // Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]).
        // Bi-Planar Component Y'CbCr 8-bit 4:2:0, full-range (luma=[0,255] chroma=[1,255]).
        // baseAddress points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct,
        // i.e. the Y-channel in this format is in the first third of the buffer!
        size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
        baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        grayscaleBufferSize = resolution.height * bytesPerRow;
        grayscaleBuffer = malloc(grayscaleBufferSize);
        if (grayscaleBuffer == NULL) {
            NSLog(@"ERROR in %@:%@:%d: couldn't allocate memory for grayscaleBuffer!", NSStringFromClass([self class]), NSStringFromSelector(_cmd), __LINE__);
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            return;
        }
        memset(grayscaleBuffer, 0, grayscaleBufferSize);
        memcpy(grayscaleBuffer, baseAddress, grayscaleBufferSize);
    }

    // do whatever you want with the grayscale buffer
    ...

    // clean-up
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    free(grayscaleBuffer);
}
#endif
Answer 3 (score: 2)
This is simply the culmination of everyone else's hard work, above and in other threads, converted to Swift 3 for anyone who finds it useful.
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags.readOnly)
        let pixelFormatType = CVPixelBufferGetPixelFormatType(pixelBuffer)
        if pixelFormatType == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
            || pixelFormatType == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange {

            let bufferHeight = CVPixelBufferGetHeight(pixelBuffer)
            let bufferWidth = CVPixelBufferGetWidth(pixelBuffer)
            let lumaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
            let size = bufferHeight * lumaBytesPerRow
            let lumaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
            let lumaByteBuffer = unsafeBitCast(lumaBaseAddress, to: UnsafeMutablePointer<UInt8>.self)

            let releaseDataCallback: CGDataProviderReleaseDataCallback = { (info: UnsafeMutableRawPointer?, data: UnsafeRawPointer, size: Int) -> () in
                // https://developer.apple.com/reference/coregraphics/cgdataproviderreleasedatacallback
                // N.B. 'CGDataProviderRelease' is unavailable: Core Foundation objects are automatically memory managed
                return
            }

            if let dataProvider = CGDataProvider(dataInfo: nil, data: lumaByteBuffer, size: size, releaseData: releaseDataCallback) {
                let colorSpace = CGColorSpaceCreateDeviceGray()
                let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue)
                let cgImage = CGImage(width: bufferWidth, height: bufferHeight, bitsPerComponent: 8, bitsPerPixel: 8, bytesPerRow: lumaBytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo, provider: dataProvider, decode: nil, shouldInterpolate: false, intent: CGColorRenderingIntent.defaultIntent)
                let greyscaleImage = UIImage(cgImage: cgImage!)
                // do what you want with the greyscale image.
            }
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags.readOnly)
    }
}