Panning a mono signal with MultiChannelMixer & MTAudioProcessingTap

Asked: 2016-04-13 17:53:58

Tags: ios avfoundation core-audio

I'm looking to pan a mono signal using an MTAudioProcessingTap together with a Multichannel Mixer audio unit, but I'm getting mono output instead of panned stereo output. The documentation states:


"The Multichannel Mixer unit (subtype kAudioUnitSubType_MultiChannelMixer) takes any number of mono or stereo streams and combines them into a single stereo output."

So mono output is unexpected. Is there any way around this? I ran a stereo signal through the exact same code and everything was fine: stereo output, panned as expected. Here's the code from my tap's prepare callback:

static void tap_PrepareCallback(MTAudioProcessingTapRef tap,
                                CMItemCount maxFrames,
                                const AudioStreamBasicDescription *processingFormat) {

    AVAudioTapProcessorContext *context = (AVAudioTapProcessorContext *)MTAudioProcessingTapGetStorage(tap);

    // Store sample rate for -setCenterFrequency:.
    context->sampleRate = processingFormat->mSampleRate;

    /* Verify processing format (this is not needed for Audio Unit, but for RMS calculation). */
    context->supportedTapProcessingFormat = true;

    if (processingFormat->mFormatID != kAudioFormatLinearPCM) {
        NSLog(@"Unsupported audio format ID for audioProcessingTap. LinearPCM only.");
        context->supportedTapProcessingFormat = false;
    }

    if (!(processingFormat->mFormatFlags & kAudioFormatFlagIsFloat)) {
        NSLog(@"Unsupported audio format flag for audioProcessingTap. Float only.");
        context->supportedTapProcessingFormat = false;
    }

    if (processingFormat->mFormatFlags & kAudioFormatFlagIsNonInterleaved) {
        context->isNonInterleaved = true;
    }


    AudioUnit audioUnit;

    AudioComponentDescription audioComponentDescription;
    audioComponentDescription.componentType = kAudioUnitType_Mixer;
    audioComponentDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
    audioComponentDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    audioComponentDescription.componentFlags = 0;
    audioComponentDescription.componentFlagsMask = 0;

    AudioComponent audioComponent = AudioComponentFindNext(NULL, &audioComponentDescription);
    if (audioComponent) {
        if (noErr == AudioComponentInstanceNew(audioComponent, &audioUnit)) {
            OSStatus status = noErr;

            // Set audio unit input/output stream format to processing format.
            if (noErr == status) {
                status = AudioUnitSetProperty(audioUnit,
                                              kAudioUnitProperty_StreamFormat,
                                              kAudioUnitScope_Input,
                                              0,
                                              processingFormat,
                                              sizeof(AudioStreamBasicDescription));
            }

            if (noErr == status) {
                status = AudioUnitSetProperty(audioUnit,
                                              kAudioUnitProperty_StreamFormat,
                                              kAudioUnitScope_Output,
                                              0,
                                              processingFormat,
                                              sizeof(AudioStreamBasicDescription));
            }

            // Set audio unit render callback.
            if (noErr == status) {
                AURenderCallbackStruct renderCallbackStruct;
                renderCallbackStruct.inputProc = AU_RenderCallback;
                renderCallbackStruct.inputProcRefCon = (void *)tap;
                status = AudioUnitSetProperty(audioUnit,
                                              kAudioUnitProperty_SetRenderCallback,
                                              kAudioUnitScope_Input,
                                              0,
                                              &renderCallbackStruct,
                                              sizeof(AURenderCallbackStruct));
            }

            // Set audio unit maximum frames per slice to max frames.
            if (noErr == status) {
                UInt32 maximumFramesPerSlice = (UInt32)maxFrames;
                status = AudioUnitSetProperty(audioUnit,
                                              kAudioUnitProperty_MaximumFramesPerSlice,
                                              kAudioUnitScope_Global,
                                              0,
                                              &maximumFramesPerSlice,
                                              (UInt32)sizeof(UInt32));
            }

            // Initialize audio unit.
            if (noErr == status) {
                status = AudioUnitInitialize(audioUnit);
            }

            if (noErr != status) {
                AudioComponentInstanceDispose(audioUnit);
                audioUnit = NULL;
            }
            context->audioUnit = audioUnit;
        }
    }
    NSLog(@"Tap channels: %d",processingFormat->mChannelsPerFrame); // = 1 for mono source file
}
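
For reference, AU_RenderCallback isn't shown above; a minimal sketch of the usual pass-through, which pulls the tap's source audio into the mixer's input bus via MTAudioProcessingTapGetSourceAudio, would be:

static OSStatus AU_RenderCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    // The tap was passed as inputProcRefCon in the prepare callback above.
    // Pull the source audio from the tap into the mixer's input buffers.
    return MTAudioProcessingTapGetSourceAudio((MTAudioProcessingTapRef)inRefCon,
                                              inNumberFrames,
                                              ioData,
                                              NULL,   // flagsOut
                                              NULL,   // timeRangeOut
                                              NULL);  // numberFramesOut
}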

I tried a few different options for the output stream format, e.g. AVAudioFormat *outFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:processingFormat->mSampleRate channels:2];, but every time I get this error: "client did not see 20 I/O cycles; giving up." The code below creates an ASBD identical to the input format except with 2 channels instead of 1, and it produces the same "20 I/O cycles" error:

AudioStreamBasicDescription asbd;
asbd.mFormatID = kAudioFormatLinearPCM;
// 0x29 == kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved
asbd.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
asbd.mSampleRate = 44100;
asbd.mBitsPerChannel = 32;
asbd.mChannelsPerFrame = 2;
asbd.mBytesPerFrame = 4;
asbd.mFramesPerPacket = 1;
asbd.mBytesPerPacket = 4;
asbd.mReserved = 0;
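
A minimal sketch of how that two-channel ASBD would be applied inside the prepare callback, assuming the mixer's input scope stays at the original mono processing format and the pan comes from the multichannel mixer's per-input pan parameter (the hard-right pan value of 1.0 is just illustrative):

// Stereo format on the mixer's output scope only; the input scope
// keeps the mono processing format set earlier in the prepare callback.
OSStatus status = AudioUnitSetProperty(audioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Output,
                                       0,
                                       &asbd,
                                       sizeof(AudioStreamBasicDescription));

// Pan input bus 0: -1 = hard left, 0 = center, 1 = hard right.
if (noErr == status) {
    status = AudioUnitSetParameter(audioUnit,
                                   kMultiChannelMixerParam_Pan,
                                   kAudioUnitScope_Input,
                                   0,
                                   1.0f,
                                   0);
}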

0 Answers