Hello, I want to make an app that does video calling between iOS devices. I have looked into OpenTok and iDoubs, but I want to build it myself from scratch. I searched a lot but couldn't find any solution, so I tried to implement it the way I imagine video chat works. So far I have done the following (using the streaming Bonjour tutorial):
Created an AVCaptureSession and obtained the CMSampleBufferRef data in the capture delegate:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        if (captureOutput == _captureOutput) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            // Lock the image buffer
            CVPixelBufferLockBaseAddress(imageBuffer, 0);
            // Get information about the image
            uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
            size_t width = CVPixelBufferGetWidth(imageBuffer);
            size_t height = CVPixelBufferGetHeight(imageBuffer);
            // Create a CGImageRef from the CVImageBufferRef
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                            kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
            CGImageRef newImage = CGBitmapContextCreateImage(newContext);
            // Release the context and color space
            CGContextRelease(newContext);
            CGColorSpaceRelease(colorSpace);
            previewImage = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
            CGImageRelease(newImage);
            [uploadImageView performSelectorOnMainThread:@selector(setImage:) withObject:previewImage waitUntilDone:YES];
            // Unlock the image buffer
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            [pool drain];
            [self sendMIxedData:@"video1"];
        }
        else if (captureOutput == _audioOutput) {
            dataA = [[NSMutableData alloc] init];
            CMBlockBufferRef blockBuffer;
            // Copy the PCM samples out of the sample buffer
            CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &currentInputAudioBufferList,
                                                                    sizeof(currentInputAudioBufferList), NULL, NULL, 0, &blockBuffer);
            for (int y = 0; y < currentInputAudioBufferList.mNumberBuffers; y++) {
                AudioBuffer audioBuffer = currentInputAudioBufferList.mBuffers[y];
                Float32 *frame = (Float32 *)audioBuffer.mData;
                [dataA appendBytes:frame length:audioBuffer.mDataByteSize];
            }
            // Release the block buffer retained by the call above
            if (blockBuffer) {
                CFRelease(blockBuffer);
            }
            [self sendMIxedData:@"audio"];
        }
    }
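For the BGRA bitmap context above to work, the video data output has to be configured to deliver 32BGRA pixel buffers. A minimal setup sketch (the queue label is a placeholder):

    // The BGRA context in the delegate assumes the capture output emits
    // kCVPixelFormatType_32BGRA buffers, so configure that explicitly.
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoOutput.videoSettings = [NSDictionary dictionaryWithObject:
                                     [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                 forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    dispatch_queue_t queue = dispatch_queue_create("com.example.videoQueue", NULL);
    [videoOutput setSampleBufferDelegate:self queue:queue];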
Now the sendMIxedData method writes these video/audio bytes to the NSStream:
    NSData *data = UIImageJPEGRepresentation([self scaleAndRotateImage:previewImage], 1.0);
    const uint8_t *message1 = (const uint8_t *)[@"video1" UTF8String];
    [_outStream write:message1 maxLength:strlen((char *)message1)];
    [_outStream write:(const uint8_t *)[data bytes] maxLength:[data length]];

    const uint8_t *message2 = (const uint8_t *)[@"audio" UTF8String];
    [_outStream write:message2 maxLength:strlen((char *)message2)];
    [_outStream write:(const uint8_t *)[dataA bytes] maxLength:[dataA length]];
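One reason these raw writes are hard to parse is that the "video1"/"audio" markers carry no length, so the receiver cannot tell where one frame ends and the next begins. A minimal sketch of length-prefixed framing; the FrameType tag and the writeAll: helper are hypothetical names, not from the code above:

    // Each message on the wire: [1-byte type][4-byte big-endian length][payload].
    typedef NS_ENUM(uint8_t, FrameType) { FrameTypeVideo = 1, FrameTypeAudio = 2 };

    - (void)sendFrame:(NSData *)payload type:(uint8_t)type
    {
        uint8_t header[5];
        header[0] = type;
        uint32_t length = CFSwapInt32HostToBig((uint32_t)[payload length]);
        memcpy(&header[1], &length, sizeof(length));
        [self writeAll:header length:sizeof(header)];
        [self writeAll:[payload bytes] length:[payload length]];
    }

    // NSOutputStream may accept fewer bytes than requested, so loop until done.
    - (void)writeAll:(const uint8_t *)bytes length:(NSUInteger)length
    {
        NSUInteger written = 0;
        while (written < length) {
            NSInteger n = [_outStream write:bytes + written maxLength:length - written];
            if (n <= 0) break; // stream error or closed
            written += (NSUInteger)n;
        }
    }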
The bytes are then received in the NSStream delegate method on the receiving device.
The problem is that I don't know whether this is how video chat actually works, and I haven't managed to use the received bytes to display the video. I tried sending the "audio" and "video1" strings along with the bytes so the receiver knows whether a chunk is video or audio, and I also tried without the extra strings. The images are received and displayed correctly, but the audio is badly distorted.
Please tell me whether this is the right way to build a video chat app or not. If it is, what should I do to make it usable? For example, should I send the audio and video data together instead of separately as in my example? I am using the simple Bonjour tutorial here, but how would I achieve the same thing with a real server?
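On the real-server part: Bonjour only provides discovery on the local network; the same NSStream pair can be pointed at any reachable host. A minimal sketch, with a placeholder host and port:

    CFReadStreamRef readStream;
    CFWriteStreamRef writeStream;
    // "example.com" and 5000 stand in for a real relay server.
    CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault,
                                       CFSTR("example.com"), 5000,
                                       &readStream, &writeStream);
    _inStream  = (NSInputStream *)readStream;    // toll-free bridged (MRC;
    _outStream = (NSOutputStream *)writeStream;  // use __bridge_transfer under ARC)
    [_inStream setDelegate:self];
    [_outStream setDelegate:self];
    [_inStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [_outStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [_inStream open];
    [_outStream open];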
I am stuck here, so please point me in the right direction.
Thanks. (Sorry about the formatting; I tried but couldn't get it right.)
Answer 0 (score 0):
A video streaming app should use a video codec such as VP8 or H.264, which will beat your JPEG-encoded frames.
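For a sense of what that looks like on iOS, here is a rough sketch of a hardware H.264 encoder via VideoToolbox (a public API since iOS 8); the callback body and error handling are elided, and the method names are illustrative:

    #import <VideoToolbox/VideoToolbox.h>

    static VTCompressionSessionRef compressionSession;

    // VideoToolbox delivers each encoded H.264 sample here; frame it and send it.
    static void compressionOutputCallback(void *refCon, void *sourceFrameRefCon,
                                          OSStatus status, VTEncodeInfoFlags infoFlags,
                                          CMSampleBufferRef sampleBuffer)
    {
        if (status != noErr || sampleBuffer == NULL) return;
        // Extract the NAL units from sampleBuffer and write them to the stream.
    }

    - (void)setUpEncoderWithWidth:(int32_t)width height:(int32_t)height
    {
        VTCompressionSessionCreate(kCFAllocatorDefault, width, height,
                                   kCMVideoCodecType_H264, NULL, NULL, NULL,
                                   compressionOutputCallback, (void *)self,
                                   &compressionSession);
        VTSessionSetProperty(compressionSession,
                             kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
        VTCompressionSessionPrepareToEncodeFrames(compressionSession);
    }

    // Feed each CVImageBufferRef from the capture delegate to the encoder.
    - (void)encodePixelBuffer:(CVImageBufferRef)pixelBuffer timestamp:(CMTime)pts
    {
        VTCompressionSessionEncodeFrame(compressionSession, pixelBuffer, pts,
                                        kCMTimeInvalid, NULL, NULL, NULL);
    }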
You should be able to display the NSData you receive by doing:

    UIImage *image = [UIImage imageWithData:data];
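On the receiving side, paired with the length-prefixed framing sketched earlier, the NSStream delegate can buffer incoming bytes and decode only complete frames. A rough sketch; _recvBuffer (an NSMutableData ivar) and _remoteImageView are hypothetical names:

    - (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
    {
        if (eventCode != NSStreamEventHasBytesAvailable) return;

        uint8_t chunk[4096];
        NSInteger n = [(NSInputStream *)aStream read:chunk maxLength:sizeof(chunk)];
        if (n <= 0) return;
        [_recvBuffer appendBytes:chunk length:(NSUInteger)n];

        // Peel off as many complete [type][length][payload] frames as we have.
        while ([_recvBuffer length] >= 5) {
            const uint8_t *p = [_recvBuffer bytes];
            uint8_t type = p[0];
            uint32_t payloadLength;
            memcpy(&payloadLength, p + 1, sizeof(payloadLength));
            payloadLength = CFSwapInt32BigToHost(payloadLength);
            if ([_recvBuffer length] < 5 + payloadLength) break; // wait for more bytes

            NSData *payload = [_recvBuffer subdataWithRange:NSMakeRange(5, payloadLength)];
            [_recvBuffer replaceBytesInRange:NSMakeRange(0, 5 + payloadLength)
                                   withBytes:NULL length:0];

            if (type == FrameTypeVideo) {
                UIImage *image = [UIImage imageWithData:payload];
                [_remoteImageView performSelectorOnMainThread:@selector(setImage:)
                                                   withObject:image waitUntilDone:NO];
            }
            // FrameTypeAudio: hand the raw PCM to an Audio Queue / Audio Unit for playback.
        }
    }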