Playing audio on iOS from an in-memory data stream

Date: 2018-06-22 22:44:08

Tags: ios audio avfoundation

I am porting an audio library to iOS that plays audio streams supplied by callbacks. The user provides a callback that returns raw PCM data, and I need to play that data. The library also has to be able to play several streams at once.

I assume I need AVFoundation, but AVAudioPlayer does not seem to support streaming audio buffers, and all the streaming documentation I can find deals with data coming straight from the network. Which API should I be using here?

Thanks!

By the way, I am not using the Apple libraries through Swift or Objective-C. I assume everything is still exposed either way, so a Swift example would be appreciated all the same!

1 Answer:

Answer 0 (score: 0)

You need to initialize:

  1. The audio session, set up to use the input and output audio units.

    
  2. The audio session and engine initialization:

    -(SInt32) audioSessionInitialization:(SInt32)preferred_sample_rate {
    
        // - - - - - - Audio Session initialization
        NSError *audioSessionError = nil;
        session = [AVAudioSession sharedInstance];
    
        // disable AVAudioSession
        [session setActive:NO error:&audioSessionError];
    
        // set category - (PlayAndRecord to use input and output session AudioUnits)
       [session setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker error:&audioSessionError];
    
       // use the caller-supplied preferred sample rate (e.g. 44100 Hz)
       double preferredSampleRate = preferred_sample_rate;
       [session setPreferredSampleRate:preferredSampleRate error:&audioSessionError];
    
       // enable AVAudioSession
       [session setActive:YES error:&audioSessionError];
    
    
       // Configure notification for device output change (speakers/headphones)
       [[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(routeChange:)
                                             name:AVAudioSessionRouteChangeNotification
                                           object:nil];
    
    
       // - - - - - - Create audio engine
       [self audioEngineInitialization];
    
       return [session sampleRate];
     }
    
  3. The audio engine and its render callback:

    -(void) audioEngineInitialization{
    
        engine = [[AVAudioEngine alloc] init];
        inputNode = [engine inputNode];
        outputNode = [engine outputNode];
    
        [engine connect:inputNode to:outputNode format:[inputNode inputFormatForBus:0]];
    
    
        AudioStreamBasicDescription asbd_player;
        asbd_player.mSampleRate       = session.sampleRate;
        asbd_player.mFormatID         = kAudioFormatLinearPCM;
        asbd_player.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        asbd_player.mFramesPerPacket  = 1;
        asbd_player.mChannelsPerFrame = 2;
        asbd_player.mBitsPerChannel   = 16;
        asbd_player.mBytesPerPacket   = 4;
        asbd_player.mBytesPerFrame    = 4;
    
        OSStatus status;
        status = AudioUnitSetProperty(inputNode.audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  0,
                                  &asbd_player,
                                  sizeof(asbd_player));
    
    
        // Add the render callback for the ioUnit: for playing
        AURenderCallbackStruct callbackStruct;
        callbackStruct.inputProc = engineInputCallback; // render callback that supplies the PCM to play
        callbackStruct.inputProcRefCon = (__bridge void *)(self);
        status = AudioUnitSetProperty(inputNode.audioUnit,
                                      kAudioUnitProperty_SetRenderCallback,
                                      kAudioUnitScope_Input,
                                      kOutputBus, // output bus (typically #define kOutputBus 0)
                                      &callbackStruct,
                                      sizeof(callbackStruct));
    
        [engine prepare];
    }
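The answer wires `engineInputCallback` into the render-callback struct but never shows the function itself. Below is a minimal sketch of what it could look like, assuming interleaved 16-bit stereo PCM (matching `asbd_player`) and a hypothetical `readPCMFrames:capacity:` method that pulls data from your library's user callback; both `MyAudioController` and that method name are assumptions, not part of the answer:

```objectivec
// Render callback referenced by callbackStruct.inputProc above.
// MyAudioController and readPCMFrames:capacity: are hypothetical names;
// substitute the class that owns the engine and your own PCM source.
static OSStatus engineInputCallback(void                       *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp       *inTimeStamp,
                                    UInt32                      inBusNumber,
                                    UInt32                      inNumberFrames,
                                    AudioBufferList            *ioData)
{
    // inRefCon is the bridged self pointer stored in inputProcRefCon.
    MyAudioController *controller = (__bridge MyAudioController *)inRefCon;

    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer *buffer = &ioData->mBuffers[i];
        UInt32 framesRead = [controller readPCMFrames:buffer->mData
                                             capacity:inNumberFrames];
        if (framesRead < inNumberFrames) {
            // Underrun: pad the remainder with silence rather than
            // leaving stale memory in the output buffer.
            const UInt32 bytesPerFrame = 4; // 2 channels x 16-bit, as in asbd_player
            memset((char *)buffer->mData + framesRead * bytesPerFrame, 0,
                   (inNumberFrames - framesRead) * bytesPerFrame);
        }
    }
    return noErr;
}
```

Note that `[engine prepare]` alone does not start playback; once everything is wired up you still need to call `[engine startAndReturnError:&error]`. Since the callback owns the output buffer, playing several streams at once comes down to mixing them yourself inside this function.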
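The notification observer registered in `audioSessionInitialization:` points at a `routeChange:` selector that the answer does not include. A minimal sketch, assuming you simply want to keep the engine running across a speaker/headphone switch (`engine` is the same ivar used above):

```objectivec
// Handler for AVAudioSessionRouteChangeNotification (registered above).
// The body is an assumption about typical handling, not part of the answer.
- (void)routeChange:(NSNotification *)notification {
    NSNumber *reasonValue = notification.userInfo[AVAudioSessionRouteChangeReasonKey];
    AVAudioSessionRouteChangeReason reason =
        (AVAudioSessionRouteChangeReason)reasonValue.unsignedIntegerValue;

    switch (reason) {
        case AVAudioSessionRouteChangeReasonNewDeviceAvailable:   // e.g. headphones plugged in
        case AVAudioSessionRouteChangeReasonOldDeviceUnavailable: // e.g. headphones unplugged
            // A route change can stop the engine; restart it if needed.
            if (!engine.isRunning) {
                [engine startAndReturnError:nil];
            }
            break;
        default:
            break;
    }
}
```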