AVAudioPlayerNode.scheduleFile()'s completionHandler is called too early

Date: 2015-04-03 06:26:28

Tags: ios8 avaudioengine

I'm trying out the new AVAudioEngine in iOS 8.

It looks like the completionHandler of player.scheduleFile() is called before the sound file has finished playing.

I am using a sound file that is 5 seconds long, and the println() message appears about 1 second before the end of the sound.

Am I doing something wrong, or am I misunderstanding the idea of a completionHandler?

Thanks!


Here is some code:

class SoundHandler {
    let engine:AVAudioEngine
    let player:AVAudioPlayerNode
    let mainMixer:AVAudioMixerNode

    init() {
        engine = AVAudioEngine()
        player = AVAudioPlayerNode()
        engine.attachNode(player)
        mainMixer = engine.mainMixerNode

        var error:NSError?
        if !engine.startAndReturnError(&error) {
            if let e = error {
                println("error \(e.localizedDescription)")
            }
        }

        engine.connect(player, to: mainMixer, format: mainMixer.outputFormatForBus(0))
    }

    func playSound() {
        var soundUrl = NSBundle.mainBundle().URLForResource("Test", withExtension: "m4a")
        var soundFile = AVAudioFile(forReading: soundUrl, error: nil)

        player.scheduleFile(soundFile, atTime: nil, completionHandler: { println("Finished!") })

        player.play()
    }
}

7 Answers:

Answer 0 (score: 6)

I see the same behavior.

From my experimentation, I believe the callback is called once the buffer/segment/file has been "scheduled" (i.e. handed off to the player), not when it has finished playing.

Even though the docs explicitly state: "Called after the buffer has completely played or the player is stopped. May be nil."

So I think it's either a bug or incorrect documentation. No idea which!
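Since the callback fires when the file has been consumed rather than when it has finished sounding, one way to reason about the real finish time is to derive the file's duration from its frame length and sample rate, which is what the later answers' workarounds boil down to. A small framework-free sketch (the helper name is mine, not from any answer):

```swift
// Duration in seconds of an audio file, given its length in frames and
// its sample rate. Delaying "finished" logic by this duration (minus
// whatever has already played) approximates the true end of playback.
func playbackDuration(frameLength: Int64, sampleRate: Double) -> Double {
    guard sampleRate > 0 else { return 0 }
    return Double(frameLength) / sampleRate
}

// The question's 5-second file at 44.1 kHz:
let seconds = playbackDuration(frameLength: 220_500, sampleRate: 44_100)
print(seconds) // 5.0
```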

Answer 1 (score: 6)

This looks like a bug, and we should file a Radar! http://bugreport.apple.com

In the meantime, as a workaround, I noticed that if you use scheduleBuffer:atTime:options:completionHandler: instead, the callback fires as expected (after playback finishes).

Sample code:

NSError *error = nil;
AVAudioFile *file = [[AVAudioFile alloc] initForReading:_fileURL commonFormat:AVAudioPCMFormatFloat32 interleaved:NO error:&error];
AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:file.processingFormat frameCapacity:(AVAudioFrameCount)file.length];
[file readIntoBuffer:buffer error:&error];

[_player scheduleBuffer:buffer atTime:nil options:AVAudioPlayerNodeBufferInterrupts completionHandler:^{
    // reminder: we're not on the main thread in here
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"done playing, as expected!");
    });
}];

Answer 2 (score: 4)

Using AVAudioTime, you can always compute the future time at which audio playback will complete. The current behavior is useful because it lets you schedule additional buffers/segments/files to play from within the callback before the current one finishes, avoiding gaps in audio playback. That lets you build a simple loop player without much work. Here's an example:

class Latch {
    var value : Bool = true
}

func loopWholeFile(file : AVAudioFile, player : AVAudioPlayerNode) -> Latch {
    let looping = Latch()
    let frames = file.length

    let sampleRate = file.processingFormat.sampleRate
    var segmentTime : AVAudioFramePosition = 0
    var segmentCompletion : AVAudioNodeCompletionHandler!
    segmentCompletion = {
        if looping.value {
            segmentTime += frames
            player.scheduleFile(file, atTime: AVAudioTime(sampleTime: segmentTime, atRate: sampleRate), completionHandler: segmentCompletion)
        }
    }
    player.scheduleFile(file, atTime: AVAudioTime(sampleTime: segmentTime, atRate: sampleRate), completionHandler: segmentCompletion)
    segmentCompletion() // call once by hand so a second copy is queued before playback starts
    player.play()

    return looping
}

The code above schedules the entire file twice before calling player.play(). As each segment nears completion, it schedules another whole file in the future, avoiding any gap in playback. To stop the loop, you use the return value, a Latch, like this:

let looping = loopWholeFile(file, player)
sleep(1000)
looping.value = false
player.stop()

Answer 3 (score: 1)

My bug report was closed as "works as intended," but Apple pointed me to new variants of the scheduleFile, scheduleSegment, and scheduleBuffer methods in iOS 11. These add a completionCallbackType parameter that you can use to specify that you want the completion callback when playback is completed:

[self.audioUnitPlayer
            scheduleSegment:self.audioUnitFile
            startingFrame:sampleTime
            frameCount:(AVAudioFrameCount)sampleLength
            atTime:nil
            completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
            completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
    // do something here
}];

The documentation doesn't say anything about how this works, but I tested it and it works for me.

I had been using this workaround for iOS 8-10:

- (void)playRecording {
    [self.audioUnitPlayer scheduleSegment:self.audioUnitFile startingFrame:sampleTime frameCount:(AVAudioFrameCount)sampleLength atTime:nil completionHandler:^{
        float totalTime = [self recordingDuration];
        float elapsedTime = [self recordingCurrentTime];
        float remainingTime = totalTime - elapsedTime;
        [self performSelector:@selector(doSomethingHere) withObject:nil afterDelay:remainingTime];
    }];
}

- (float)recordingDuration {
    float duration = self.audioUnitFile.length / self.audioUnitFile.processingFormat.sampleRate;
    if (isnan(duration)) {
        duration = 0;
    }
    return duration;
}

- (float)recordingCurrentTime {
    AVAudioTime *nodeTime = self.audioUnitPlayer.lastRenderTime;
    AVAudioTime *playerTime = [self.audioUnitPlayer playerTimeForNodeTime:nodeTime];
    AVAudioFramePosition sampleTime = playerTime.sampleTime;
    if (sampleTime == 0) { return self.audioUnitLastKnownTime; } // this happens when the player isn't playing
    sampleTime += self.audioUnitStartingFrame; // if we trimmed from the start, or changed the location with the location slider, the time before that point won't be included in the player time, so we have to track it ourselves and add it here
    float time = sampleTime / self.audioUnitFile.processingFormat.sampleRate;
    self.audioUnitLastKnownTime = time;
    return time;
}

Answer 4 (score: 0)

Yes, it does get called slightly before the file (or buffer) has finished. If you call [myNode stop] from inside the completion handler, the file (or buffer) will be cut off before it completes. However, if you call [myEngine stop], the file (or buffer) will play through to the end.
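A minimal Swift sketch of that difference; the setup here is my own illustration (a generated buffer of silence stands in for a real file), not code from the answer:

```swift
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)

// One second of silence at 44.1 kHz, standing in for real audio.
let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!
let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 44_100)!
buffer.frameLength = buffer.frameCapacity

engine.connect(player, to: engine.mainMixerNode, format: format)

player.scheduleBuffer(buffer) {
    // This fires slightly early, on a non-main thread.
    // player.stop()   // per this answer: cuts off the tail early
    // engine.stop()   // per this answer: lets the buffer play to the end
}

try engine.start()
player.play()
```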

Answer 5 (score: 0)

// audioFile here is our original audio

audioPlayerNode.scheduleFile(audioFile, at: nil, completionHandler: {
    print("scheduleFile Complete")

    var delayInSeconds: Double = 0

    if let lastRenderTime = self.audioPlayerNode.lastRenderTime,
       let playerTime = self.audioPlayerNode.playerTime(forNodeTime: lastRenderTime) {

        // `rate` is an optional playback rate, e.g. from an attached AVAudioUnitTimePitch
        if let rate = rate {
            delayInSeconds = Double(audioFile.length - playerTime.sampleTime) / audioFile.processingFormat.sampleRate / Double(rate)
        } else {
            delayInSeconds = Double(audioFile.length - playerTime.sampleTime) / audioFile.processingFormat.sampleRate
        }
    }

    // schedule a stop timer for when audio finishes playing
    DispatchQueue.main.asyncAfter(deadline: .now() + delayInSeconds) {
        audioEngine.mainMixerNode.removeTap(onBus: 0)
        // Playback has completed
    }
})

Answer 6 (score: 0)

As of today, in a project with a deployment target of 12.4, on a device running 12.4.1, here is how we successfully stop a node after playback completes:

// audioFile and playerNode created here ...

playerNode.scheduleFile(audioFile, at: nil, completionCallbackType: .dataPlayedBack) { _ in
    os_log(.debug, log: self.log, "%@", "Completing playing sound effect: \(filePath) ...")

    DispatchQueue.main.async {
        os_log(.debug, log: self.log, "%@", "... now actually completed: \(filePath)")

        self.engine.disconnectNodeOutput(playerNode)
        self.engine.detach(playerNode)
    }
}

The main difference from the earlier answers is that it defers detaching the node to the main thread (which I guess is also the audio render thread?), instead of performing the detach on the callback thread.