Duplex Audio Communication Using AudioUnits

Date: 2014-02-04 07:08:10

Tags: ios core-audio record playback audiounit

I am developing an application with the following requirements:

  1. Record real-time audio from the iOS device (iPhone/iPad) and send it to a server over the network
  2. Play audio received from the network server on the iOS device (iPhone/iPad)
  3. Both of the above need to happen simultaneously

I am using AudioUnit for this.

The problem I am running into is that I hear my own voice, picked up by the iPhone microphone, in the speaker, rather than the audio received from the network server.

I have searched a lot for how to avoid this but have not found a solution.

If anyone has faced the same problem and found a solution, sharing it would be a great help.
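
Not shown below is the audio session setup; for simultaneous recording and playback the session needs a category that allows both. A minimal sketch of that step, assuming AVAudioSession's play-and-record category (this part is omitted from my code):

    // Audio session setup (requires AVFoundation); run before starting the unit.
    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    // PlayAndRecord allows microphone input and speaker output at the same time.
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    // Ask for the 16 kHz rate used by the stream format below (a preference only).
    [session setPreferredSampleRate:16000 error:&error];
    [session setActive:YES error:&error];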

Here is my code that initializes the audio unit:

    -(void)initializeAudioUnit
    {
        audioUnit = NULL;

        // Describe the Voice-Processing I/O audio component.
        AudioComponentDescription desc;
        desc.componentType = kAudioUnitType_Output;
        desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
        desc.componentFlags = 0;
        desc.componentFlagsMask = 0;
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;

        // Get the component and create the audio unit.
        AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
        status = AudioComponentInstanceNew(inputComponent, &audioUnit);

        // Enable IO for recording (input scope of the input bus).
        UInt32 flag = 1;
        status = AudioUnitSetProperty(audioUnit,
                                      kAudioOutputUnitProperty_EnableIO,
                                      kAudioUnitScope_Input,
                                      kInputBus,
                                      &flag,
                                      sizeof(flag));

        // Enable IO for playback (output scope of the output bus).
        status = AudioUnitSetProperty(audioUnit,
                                      kAudioOutputUnitProperty_EnableIO,
                                      kAudioUnitScope_Output,
                                      kOutputBus,
                                      &flag,
                                      sizeof(flag));

        // Describe the format: 16 kHz, mono, 16-bit signed integer PCM.
        AudioStreamBasicDescription audioStreamBasicDescription;
        audioStreamBasicDescription.mSampleRate         = 16000;
        audioStreamBasicDescription.mFormatID           = kAudioFormatLinearPCM;
        audioStreamBasicDescription.mFormatFlags        = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kLinearPCMFormatFlagIsNonInterleaved;
        audioStreamBasicDescription.mFramesPerPacket    = 1;
        audioStreamBasicDescription.mChannelsPerFrame   = 1;
        audioStreamBasicDescription.mBitsPerChannel     = 16;
        audioStreamBasicDescription.mBytesPerPacket     = 2;
        audioStreamBasicDescription.mBytesPerFrame      = 2;

        // Apply the format to the data coming out of the input bus...
        status = AudioUnitSetProperty(audioUnit,
                                      kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Output,
                                      kInputBus,
                                      &audioStreamBasicDescription,
                                      sizeof(audioStreamBasicDescription));
        NSLog(@"Status[%d]",(int)status);

        // ...and to the data going into the output bus.
        status = AudioUnitSetProperty(audioUnit,
                                      kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Input,
                                      kOutputBus,
                                      &audioStreamBasicDescription,
                                      sizeof(audioStreamBasicDescription));
        NSLog(@"Status[%d]",(int)status);

        // Set the input (recording) callback.
        AURenderCallbackStruct callbackStruct;
        callbackStruct.inputProc = recordingCallback;
        callbackStruct.inputProcRefCon = (__bridge void *)(self);
        status = AudioUnitSetProperty(audioUnit,
                                      kAudioOutputUnitProperty_SetInputCallback,
                                      kAudioUnitScope_Global,
                                      kInputBus,
                                      &callbackStruct,
                                      sizeof(callbackStruct));

        // Set the render (playback) callback.
        callbackStruct.inputProc = playbackCallback;
        callbackStruct.inputProcRefCon = (__bridge void *)(self);
        status = AudioUnitSetProperty(audioUnit,
                                      kAudioUnitProperty_SetRenderCallback,
                                      kAudioUnitScope_Global,
                                      kOutputBus,
                                      &callbackStruct,
                                      sizeof(callbackStruct));

        // Don't let the unit allocate its own input buffer;
        // the recording callback supplies one to AudioUnitRender.
        flag = 0;
        status = AudioUnitSetProperty(audioUnit,
                                      kAudioUnitProperty_ShouldAllocateBuffer,
                                      kAudioUnitScope_Output,
                                      kInputBus,
                                      &flag,
                                      sizeof(flag));
    }
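
For reference, kInputBus and kOutputBus are not defined in the snippet above; on the RemoteIO/VoiceProcessingIO unit, element 1 is conventionally the microphone side and element 0 the speaker side. A sketch of those definitions, together with the start-up calls that follow initialization:

    #define kOutputBus 0   // element 0: output to the speaker
    #define kInputBus  1   // element 1: input from the microphone

    // After -initializeAudioUnit returns, the unit still has to be
    // initialized and started before the callbacks begin to fire:
    status = AudioUnitInitialize(audioUnit);
    status = AudioOutputUnitStart(audioUnit);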
    

Recording callback:

    static OSStatus recordingCallback (void *inRefCon,
                                       AudioUnitRenderActionFlags *ioActionFlags,
                                       const AudioTimeStamp *inTimeStamp,
                                       UInt32 inBusNumber,
                                       UInt32 inNumberFrames,
                                       AudioBufferList *ioData)
    {
        MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;

        // One mono buffer of 16-bit samples for this render cycle.
        AudioBuffer tempBuffer;
        tempBuffer.mNumberChannels = 1;
        tempBuffer.mDataByteSize = inNumberFrames * 2;
        tempBuffer.mData = malloc(inNumberFrames * 2);

        AudioBufferList bufferList;
        bufferList.mNumberBuffers = 1;
        bufferList.mBuffers[0] = tempBuffer;

        // Pull the microphone samples from the input bus into our buffer.
        OSStatus status = AudioUnitRender(THIS->audioUnit,
                                          ioActionFlags,
                                          inTimeStamp,
                                          kInputBus,
                                          inNumberFrames,
                                          &bufferList);
        if (noErr != status) {
            printf("AudioUnitRender error: %d", (int)status);
            return noErr;
        }

        // Encode the captured audio and send it to the server.
        [THIS processAudio:&bufferList];

        free(bufferList.mBuffers[0].mData);

        return noErr;
    }
    

Playback callback:

    static OSStatus playbackCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData)
    {
        NSLog(@"In playback callback");

        MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;

        // Read the encoded audio received from the server out of the circular buffer.
        int32_t availableBytes = 0;
        char *inBuffer = GetDataFromCircularBuffer(&THIS->mybuffer, &availableBytes);
        NSLog(@"bytes available in buffer[%d]", availableBytes);

        // Decode the Speex data (into THIS->outTemp) and mark it consumed.
        decodeSpeexData(inBuffer, availableBytes, (__bridge void *)(THIS));
        ConsumeReadBytes(&(THIS->mybuffer), availableBytes);

        // Copy the decoded PCM into the buffer the unit is about to play.
        char *targetBuffer = (char *)ioData->mBuffers[0].mData;
        memcpy(targetBuffer, THIS->outTemp, inNumberFrames * 2);

        return noErr;
    }
    

Processing the audio recorded from the mic:

    - (void)processAudio:(AudioBufferList *)bufferList
    {
        AudioBuffer sourceBuffer = bufferList->mBuffers[0];

        // Speex-encode the raw PCM, then hand the encoded bytes to the
        // network send path on the main thread.
        int size = 0;
        encodeAudioDataSpeex((spx_int16_t *)sourceBuffer.mData, sourceBuffer.mDataByteSize, &size, (__bridge void *)(self));
        [self performSelectorOnMainThread:@selector(SendAudioData:)
                               withObject:[NSData dataWithBytes:self->jitterBuffer length:size]
                            waitUntilDone:NO];

        NSLog(@"Encoded size: %i", size);
    }
    

1 Answer:

Answer 0 (score: 0)

Your playbackCallback render callback is responsible for the audio sent to the RemoteIO speaker output. If this render callback puts no data into its callback buffers, whatever junk was left in those buffers (potentially whatever was previously in the recording callback's buffers) may be sent to the speaker.
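
In the question's playbackCallback, that means checking availableBytes before the memcpy and writing silence when the circular buffer is empty. A minimal sketch of that guard, using the names from the question's code:

    // If no decoded data has arrived yet, output silence instead of
    // playing whatever stale bytes are sitting in the output buffers.
    if (availableBytes <= 0) {
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        }
        return noErr;
    }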

Also, Apple DTS strongly suggests that your recordingCallback not include any memory-management calls, such as malloc(). So that may be a bug contributing to the problem as well.
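
One way to avoid that is to allocate the capture buffer once, outside the real-time thread, and reuse it on every render. A sketch, where captureBuffer is a hypothetical ivar rather than anything from the question's code:

    // During setup (not in the render callback): query the worst-case
    // render size and allocate one reusable buffer for it.
    UInt32 maxFrames = 0;
    UInt32 propSize = sizeof(maxFrames);
    AudioUnitGetProperty(audioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                         kAudioUnitScope_Global, 0, &maxFrames, &propSize);
    captureBuffer = malloc(maxFrames * sizeof(SInt16));

    // In recordingCallback, point tempBuffer at the preallocated memory
    // and drop the per-render malloc()/free() pair:
    tempBuffer.mData = THIS->captureBuffer;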