How to apply audio effects to a file and write it back to the file system - iOS

Asked: 2017-08-21 14:29:09

Tags: ios swift audio avfoundation audiokit

I'm building an app that should let users apply audio filters to recorded audio, such as reverb or boost.

I haven't been able to find any viable source of information on how to apply a filter to the file itself, which I need because the processed file will later be uploaded to a server.

I'm currently using AudioKit for visualization, and I'm aware it can do audio processing, but only for playback. Any suggestions for further research would be appreciated.

2 Answers:

Answer 0 (score: 7)

AudioKit has an offline render node that doesn't require iOS 11. Here's an example. The player.schedule(...) and player.play(at:) calls are required because AKAudioPlayer's underlying AVAudioPlayerNode will block the calling thread, waiting for the next render, if you start it with player.play().

import UIKit
import AVFoundation
import AudioKit

class ViewController: UIViewController {

    var player: AKAudioPlayer?
    var reverb = AKReverb()
    var boost = AKBooster()
    var offlineRender = AKOfflineRenderNode()

    override func viewDidLoad() {
        super.viewDidLoad()

        guard let url = Bundle.main.url(forResource: "theFunkiestFunkingFunk", withExtension: "mp3") else {
            return
        }
        do {
            let audioFile = try AKAudioFile(forReading: url)
            player = try AKAudioPlayer(file: audioFile)
        } catch {
            print(error)
            return
        }
        guard let player = player else {
            return
        }

        // Signal chain: player -> reverb -> boost -> offline render node
        player >>> reverb >>> boost >>> offlineRender

        AudioKit.output = offlineRender
        AudioKit.start()

        let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
        let dstURL = docs.appendingPathComponent("rendered.caf")

        // Disable real-time rendering while we render offline
        offlineRender.internalRenderEnabled = false
        player.schedule(from: 0, to: player.duration, avTime: nil)
        let sampleTimeZero = AVAudioTime(sampleTime: 0, atRate: AudioKit.format.sampleRate)
        player.play(at: sampleTimeZero)
        do {
            try offlineRender.renderToURL(dstURL, seconds: player.duration)
        } catch {
            print(error)
            return
        }
        offlineRender.internalRenderEnabled = true

        print("Done! Rendered to " + dstURL.path)
    }
}
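Since the question mentions uploading the processed file to a server: once renderToURL(_:seconds:) returns, dstURL points at an ordinary file you can hand to an upload task. A minimal sketch using URLSession; the endpoint URL and content type are placeholders, not part of the original answer:

func upload(renderedFile: URL) {
    // Hypothetical endpoint; replace with your own server URL.
    var request = URLRequest(url: URL(string: "https://example.com/upload")!)
    request.httpMethod = "POST"
    request.setValue("audio/x-caf", forHTTPHeaderField: "Content-Type")

    let task = URLSession.shared.uploadTask(with: request, fromFile: renderedFile) { _, response, error in
        if let error = error {
            print("Upload failed: \(error)")
        } else if let http = response as? HTTPURLResponse {
            print("Upload finished with status \(http.statusCode)")
        }
    }
    task.resume()
}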

Answer 1 (score: 5)

You can use the newly introduced "manual rendering" feature from the Audio Unit updates (see the example below).

If you need to support older macOS/iOS versions, I would be surprised if you couldn't achieve the same with AudioKit (even though I haven't tried it myself). For example, use an AKSamplePlayer as your first node (it will read your audio file), then build and connect your effects, and use an AKNodeRecorder as your last node.
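For reference, here is an untested sketch of the chain that paragraph describes, written against AudioKit 4-era names (using AKAudioPlayer as the source, as in the first answer). Unlike the offline approach above, AKNodeRecorder captures in real time while the player plays; treat every call here as an assumption to verify against your AudioKit version:

import AudioKit

// Untested sketch: player -> effects -> recorder, capturing in real time.
func processWithAudioKit(source: URL) throws {
    let file = try AKAudioFile(forReading: source)
    let player = try AKAudioPlayer(file: file)   // first node, reads the file
    let reverb = AKReverb(player)                // effects in the middle
    let boost = AKBooster(reverb, gain: 1.5)

    let outputFile = try AKAudioFile()           // temp file the recorder writes to
    let recorder = try AKNodeRecorder(node: boost, file: outputFile)

    AudioKit.output = boost
    AudioKit.start()
    try recorder.record()
    player.play()
    // Real-time capture takes player.duration seconds; stop when playback ends.
    DispatchQueue.main.asyncAfter(deadline: .now() + player.duration) {
        recorder.stop()
        print("Recorded to \(outputFile.url)")
    }
}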

Manual rendering example using the new Audio Unit features
import AVFoundation

//: ## Source File
//: Open the audio file to process
let sourceFile: AVAudioFile
let format: AVAudioFormat
do {
    let sourceFileURL = Bundle.main.url(forResource: "mixLoop", withExtension: "caf")!
    sourceFile = try AVAudioFile(forReading: sourceFileURL)
    format = sourceFile.processingFormat
} catch {
    fatalError("could not open source audio file, \(error)")
}

//: ## Engine Setup
//:    player -> reverb -> mainMixer -> output
//: ### Create and configure the engine and its nodes
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let reverb = AVAudioUnitReverb()

engine.attach(player)
engine.attach(reverb)

// set desired reverb parameters
reverb.loadFactoryPreset(.mediumHall)
reverb.wetDryMix = 50

// make connections
engine.connect(player, to: reverb, format: format)
engine.connect(reverb, to: engine.mainMixerNode, format: format)

// schedule source file
player.scheduleFile(sourceFile, at: nil)
//: ### Enable offline manual rendering mode
do {
    let maxNumberOfFrames: AVAudioFrameCount = 4096 // maximum number of frames the engine will be asked to render in any single render call
    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: maxNumberOfFrames)
} catch {
    fatalError("could not enable manual rendering mode, \(error)")
}
//: ### Start the engine and player
do {
    try engine.start()
    player.play()
} catch {
    fatalError("could not start engine, \(error)")
}
//: ## Offline Render
//: ### Create an output buffer and an output file
//: Output buffer format must be same as engine's manual rendering output format
let outputFile: AVAudioFile
do {
    let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
    let outputURL = URL(fileURLWithPath: documentsPath + "/mixLoopProcessed.caf")
    outputFile = try AVAudioFile(forWriting: outputURL, settings: sourceFile.fileFormat.settings)
} catch {
    fatalError("could not open output audio file, \(error)")
}

// buffer to which the engine will render the processed data
let buffer: AVAudioPCMBuffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat, frameCapacity: engine.manualRenderingMaximumFrameCount)!
//: ### Render loop
//: Pull the engine for desired number of frames, write the output to the destination file
while engine.manualRenderingSampleTime < sourceFile.length {
    do {
        let framesToRender = min(buffer.frameCapacity, AVAudioFrameCount(sourceFile.length - engine.manualRenderingSampleTime))
        let status = try engine.renderOffline(framesToRender, to: buffer)
        switch status {
        case .success:
            // data rendered successfully
            try outputFile.write(from: buffer)

        case .insufficientDataFromInputNode:
            // applicable only if using the input node as one of the sources
            break

        case .cannotDoInCurrentContext:
            // engine could not render in the current render call, retry in next iteration
            break

        case .error:
            // error occurred while rendering
            fatalError("render failed")
        }
    } catch {
        fatalError("render failed, \(error)")
    }
}

player.stop()
engine.stop()

print("Output \(outputFile.url)")
print("AVAudioEngine offline rendering completed")

You can find more documentation and examples about the Audio Unit format updates there.