The main goal of the app I'm trying to build is peer-to-peer video streaming (similar to FaceTime, but over Bluetooth/WiFi).
Using AVFoundation I'm able to capture video/audio sample buffers, and I then send the sample buffer data across. The problem now is processing the sample buffer data on the receiving side.
For video sample buffers I can get a UIImage out of the sample buffer. But for audio sample buffers I don't know how to process them so that I can play the audio.
So the question is: how do I process/play the audio sample buffers?
Right now I'm just drawing the waveform, like Apple's Wavy sample code does:
CMSampleBufferRef sampleBuffer; // the received audio sample buffer

// Number of samples in the buffer and which channel to read.
CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
NSUInteger channelIndex = 0;

// Get a pointer to the raw 16-bit PCM data inside the block buffer.
CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16));
size_t lengthAtOffset = 0;
size_t totalLength = 0;
SInt16 *samples = NULL;
CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset,
                            &lengthAtOffset, &totalLength, (char **)(&samples));

// Split the buffer into numSamplesToRead chunks and plot each chunk's peak.
int numSamplesToRead = 1;
for (int i = 0; i < numSamplesToRead; i++) {
    SInt16 subSet[numSamples / numSamplesToRead];
    for (int j = 0; j < numSamples / numSamplesToRead; j++)
        subSet[j] = samples[(i * (numSamples / numSamplesToRead)) + j];

    SInt16 audioSample = [Util maxValueInArray:subSet
                                        ofSize:(numSamples / numSamplesToRead)];
    // Cast before dividing; integer division would always yield 0 or 1.
    double scaledSample = (double)audioSample / INT16_MAX;

    // plot waveform using scaledSample
    [self updateUI:scaledSample];
}
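For playback, the receiver first needs to know the exact PCM layout (sample rate, channel count, bits per sample). A minimal sketch, not from the original post, of reading it from the sample buffer's format description:

// Read the stream description attached to the sample buffer so both
// ends can agree on the PCM format before the data goes over the wire.
CMAudioFormatDescriptionRef formatDesc =
    CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *asbd =
    CMAudioFormatDescriptionGetStreamBasicDescription(formatDesc);
if (asbd) {
    Float64 sampleRate = asbd->mSampleRate;        // e.g. 44100.0
    UInt32 channels    = asbd->mChannelsPerFrame;  // e.g. 1 (mono)
    UInt32 bits        = asbd->mBitsPerChannel;    // e.g. 16
    // send these alongside the raw samples, or hard-code a matching format
}

Given raw SInt16 PCM and a known format, one possible way to play it is to convert the samples to float and schedule them on an AVAudioPlayerNode. This is a sketch using the newer AVAudioEngine API; the mono/44.1 kHz values are assumptions that must match the capture side, and setUpPlayback/schedulePCM are hypothetical helpers, not part of the original post:

#import <AVFoundation/AVFoundation.h>

static AVAudioEngine *engine;
static AVAudioPlayerNode *player;
static AVAudioFormat *format;

// One-time setup: player -> mixer graph rendering mono float at 44.1 kHz
// (assumed values; they must match the capture format).
static void setUpPlayback(void)
{
    engine = [[AVAudioEngine alloc] init];
    player = [[AVAudioPlayerNode alloc] init];
    format = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                              sampleRate:44100.0
                                                channels:1
                                             interleaved:NO];
    [engine attachNode:player];
    [engine connect:player to:engine.mainMixerNode format:format];
    [engine startAndReturnError:NULL];
    [player play];
}

// Per received packet: wrap the SInt16 samples in an AVAudioPCMBuffer
// (converted to float) and queue it for playback.
static void schedulePCM(const SInt16 *samples, AVAudioFrameCount count)
{
    AVAudioPCMBuffer *buffer =
        [[AVAudioPCMBuffer alloc] initWithPCMFormat:format frameCapacity:count];
    buffer.frameLength = count;
    float *dst = buffer.floatChannelData[0];
    for (AVAudioFrameCount i = 0; i < count; i++)
        dst[i] = samples[i] / 32768.0f; // scale SInt16 into [-1.0, 1.0]
    [player scheduleBuffer:buffer completionHandler:nil];
}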
Answer 0 (score: -5)
This shows how you can handle the video side (here: grabbing the ARGB picture and converting it to a Qt (Nokia Qt) QImage; you can swap in another image type).
Put it in your delegate class:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Lock the pixel buffer so its base address stays valid while reading.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // SVideoSample is a custom struct describing one captured frame.
    SVideoSample sample;
    sample.pImage      = (char *)CVPixelBufferGetBaseAddress(imageBuffer);
    sample.bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    sample.width       = CVPixelBufferGetWidth(imageBuffer);
    sample.height      = CVPixelBufferGetHeight(imageBuffer);

    // Wrap the raw ARGB bytes in a QImage (no copy) and hand it on.
    QImage img((unsigned char *)sample.pImage, sample.width, sample.height,
               sample.bytesPerRow, QImage::Format_ARGB32);
    self->m_receiver->eventReceived(img);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    [pool drain];
}
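If you are not using Qt, a sketch of a common alternative (my addition, not from the answer; it assumes the capture output is configured for kCVPixelFormatType_32BGRA) is to build a UIImage from the same locked pixel buffer with Core Graphics:

// Build a UIImage from the locked BGRA pixel buffer via Core Graphics.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(
    CVPixelBufferGetBaseAddress(imageBuffer),
    CVPixelBufferGetWidth(imageBuffer),
    CVPixelBufferGetHeight(imageBuffer),
    8,                                        // bits per component
    CVPixelBufferGetBytesPerRow(imageBuffer),
    colorSpace,
    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);

As with the QImage path, this must run while the pixel buffer's base address is still locked.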