Hi everyone.
For the past 3 weeks I have been trying to solve an issue with equalizing a simple audio stream over Bluetooth on macOS.
The setup is basically this: Player -> Equalizer -> Mixer -> Bluetooth audio device.
Whenever I use a wired audio device (HDMI, built-in speakers, 3.5 mm jack, etc.) everything works fine. If I try a Bluetooth device instead (I have tried plenty of cheap and very expensive headphones), everything also seems fine at first, but as soon as you start increasing the gain of the bands the resulting audio becomes heavily distorted. On a wired connection with the same gain values everything is fine: the gain is applied properly and it sounds good. At first I thought this had something to do with sample rates, but after quite a bit of testing I am fairly sure that is not the issue (a minimal sketch for checking the device's actual sample rate is included after the example code below).
I have put together a simple example in a Swift Playground that anyone can try (you need to add a track.mp3 file to the Playground's resources):
import Cocoa
import AVFoundation
import PlaygroundSupport
// Returns the AudioDeviceID of the current default output device.
func getCurrentOutputDeviceId() -> AudioDeviceID {
    var addr = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultOutputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMaster)
    var id: AudioObjectID = kAudioDeviceUnknown
    var size = UInt32(MemoryLayout.size(ofValue: id))
    AudioObjectGetPropertyData(
        AudioObjectID(kAudioObjectSystemObject),
        &addr,
        0,
        nil,
        &size,
        &id
    )
    return id
}
let engine = AVAudioEngine()
// Load the whole file into a PCM buffer
let path = Bundle.main.path(forResource: "track", ofType: "mp3")!
let url = URL(fileURLWithPath: path)
let file = try! AVAudioFile(forReading: url)
let fileFormat = file.processingFormat
let frameCount = UInt32(file.length)
let buffer = AVAudioPCMBuffer(pcmFormat: fileFormat, frameCapacity: frameCount)!
try! file.read(into: buffer, frameCount: frameCount)
let player = AVAudioPlayerNode()
let eq = AVAudioUnitEQ(numberOfBands: 3)
eq.globalGain = 0
eq.bypass = false
// Boost three low-frequency parametric bands by the same gain
let gain = Float32(23.0)
let band1 = eq.bands[0]
band1.filterType = .parametric
band1.bandwidth = 0.5
band1.gain = gain
band1.frequency = 32.0
band1.bypass = false
let band2 = eq.bands[1]
band2.filterType = .parametric
band2.bandwidth = 0.5
band2.gain = gain
band2.frequency = 64.0
band2.bypass = false
let band3 = eq.bands[2]
band3.filterType = .parametric
band3.bandwidth = 0.5
band3.gain = gain
band3.frequency = 125.0
band3.bypass = false
let converter = AVAudioMixerNode()
// Why do I have to do this to force playback through the already selected device?
// I don't need to do this with non-Bluetooth devices.
// It also sometimes works if you switch the input to the internal microphone rather than the Bluetooth device's mic.
try! engine.outputNode.auAudioUnit.setDeviceID(getCurrentOutputDeviceId())
let deviceFormat = engine.outputNode.outputFormat(forBus: 0)
engine.attach(player)
engine.attach(eq)
engine.attach(converter)
engine.connect(player, to: eq, format: fileFormat)
engine.connect(eq, to: converter, format: fileFormat)
engine.connect(converter, to: engine.mainMixerNode, format: deviceFormat)
engine.prepare()
try! engine.start()
print(engine)
player.play()
player.scheduleBuffer(buffer, at: AVAudioTime(hostTime: 0), options: .loops, completionHandler: nil)

// Keep the Playground process alive so playback continues
PlaygroundPage.current.needsIndefiniteExecution = true
Please try this on a wired vs. a wireless device and you will notice the difference immediately.
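For anyone who wants to double-check the sample-rate theory themselves, here is a minimal sketch (not part of the repro above; it reuses the getCurrentOutputDeviceId helper and the standard kAudioDevicePropertyNominalSampleRate Core Audio property) that you can append to the end of the Playground to print what the output device is actually running at next to what AVAudioEngine reports:

// Minimal diagnostic sketch: compare the output device's nominal sample rate
// (as reported by Core Audio) with the format AVAudioEngine sees on its output node.
func getNominalSampleRate(of deviceId: AudioDeviceID) -> Float64 {
    var addr = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMaster)
    var rate: Float64 = 0
    var size = UInt32(MemoryLayout.size(ofValue: rate))
    AudioObjectGetPropertyData(deviceId, &addr, 0, nil, &size, &rate)
    return rate
}

print("Device nominal sample rate:", getNominalSampleRate(of: getCurrentOutputDeviceId()))
print("Engine output format:", engine.outputNode.outputFormat(forBus: 0))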
I noticed a few things during the painful debugging process:
Look at the setDeviceID call in the example (the one with the comment block above it): I only need to do this when using a Bluetooth device; without it there is a microphone feedback loop for about half a second and then no sound at all.
If I go to System Preferences -> Sound and switch to the Input tab, I can get into the microphone feedback loop whenever the Bluetooth headset is selected, but only when that same Bluetooth device's microphone is selected as the input (a small check for this is sketched right below).
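Here is a small sketch for that input-device observation (again not part of the original repro; it just queries the default input device with the same Core Audio pattern as getCurrentOutputDeviceId, so you can see which microphone is active when the feedback loop starts):

// Minimal sketch: print the current default input and output devices so you can tell
// whether the Bluetooth headset's microphone is the one that is active
// when the feedback loop appears.
func getCurrentInputDeviceId() -> AudioDeviceID {
    var addr = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultInputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMaster)
    var id: AudioObjectID = kAudioDeviceUnknown
    var size = UInt32(MemoryLayout.size(ofValue: id))
    AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject), &addr, 0, nil, &size, &id)
    return id
}

print("Default output device:", getCurrentOutputDeviceId())
print("Default input device:", getCurrentInputDeviceId())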
My setup:
macOS 10.14 (18A391)
Xcode 10.1 (10B61)
macOS SDK 10.14
Since AUGraph is deprecated, I am trying to rewrite my app https://github.com/nodeful/eqMac2 in Swift using AVAudioEngine. This bug is holding me back a lot and I would love to get it resolved.
Regards,
Roman
P.S. I have also asked the same question on the Apple Developer Forums: https://forums.developer.apple.com/message/348717, in case anyone wants to follow up there.