I have two classes, MicrophoneHandler and AudioPlayer. I have managed to tap the microphone data using AVCaptureSession, following the accepted answer here, and I convert the CMSampleBuffer to NSData with this function:
func sendDataToDelegate(buffer: CMSampleBuffer!)
{
    let block = CMSampleBufferGetDataBuffer(buffer)
    var length = 0
    var data: UnsafeMutablePointer<Int8> = nil
    var status = CMBlockBufferGetDataPointer(block!, 0, nil, &length, &data) // TODO: check for errors
    let result = NSData(bytesNoCopy: data, length: length, freeWhenDone: false)
    self.delegate.handleBuffer(result)
}
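(A note on lifetime: the NSData above is created with bytesNoCopy and freeWhenDone: false, so it merely wraps memory owned by the CMBlockBuffer and is only valid while the sample buffer is alive. In case that turns out to matter, here is a copying variant I also considered, assuming the extra copy per callback is acceptable:)

func sendDataToDelegateCopying(buffer: CMSampleBuffer!)
{
    let block = CMSampleBufferGetDataBuffer(buffer)
    var length = 0
    var data: UnsafeMutablePointer<Int8> = nil
    CMBlockBufferGetDataPointer(block!, 0, nil, &length, &data)
    // NSData(bytes:length:) copies the bytes, so the result outlives the CMSampleBuffer
    let result = NSData(bytes: data, length: length)
    self.delegate.handleBuffer(result)
}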
I now want to play the audio back by converting the NSData produced above to an AVAudioPCMBuffer and playing it with AVAudioEngine. My AudioPlayer class is below:
var engine: AVAudioEngine!
var playerNode: AVAudioPlayerNode!
var mixer: AVAudioMixerNode!

override init()
{
    super.init()
    self.setup()
    self.start()
}

func handleBuffer(data: NSData)
{
    let newBuffer = self.toPCMBuffer(data)
    print(newBuffer)
    self.playerNode.scheduleBuffer(newBuffer, completionHandler: nil)
}

func setup()
{
    self.engine = AVAudioEngine()
    self.playerNode = AVAudioPlayerNode()
    self.engine.attachNode(self.playerNode)
    self.mixer = engine.mainMixerNode
    engine.connect(self.playerNode, to: self.mixer, format: self.mixer.outputFormatForBus(0))
}

func start()
{
    do {
        try self.engine.start()
    }
    catch {
        print("error couldn't start engine")
    }
    self.playerNode.play()
}

func toPCMBuffer(data: NSData) -> AVAudioPCMBuffer
{
    let audioFormat = AVAudioFormat(commonFormat: AVAudioCommonFormat.PCMFormatFloat32, sampleRate: 8000, channels: 2, interleaved: false) // given NSData audio format
    let PCMBuffer = AVAudioPCMBuffer(PCMFormat: audioFormat, frameCapacity: UInt32(data.length) / audioFormat.streamDescription.memory.mBytesPerFrame)
    PCMBuffer.frameLength = PCMBuffer.frameCapacity
    let channels = UnsafeBufferPointer(start: PCMBuffer.floatChannelData, count: Int(PCMBuffer.format.channelCount))
    data.getBytes(UnsafeMutablePointer<Void>(channels[0]), length: data.length)
    return PCMBuffer
}
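Note that toPCMBuffer hard-codes Float32, 8 kHz, 2-channel, non-interleaved, and copies every byte into channel 0 only. To sanity-check that assumption against what the tap actually delivers, I could log the real stream description with a small diagnostic like this (logFormat is a hypothetical helper name):

// Hypothetical diagnostic: print the actual AudioStreamBasicDescription of the
// tapped CMSampleBuffer so it can be compared against the format assumed above.
func logFormat(buffer: CMSampleBuffer!)
{
    guard let desc = CMSampleBufferGetFormatDescription(buffer) else { return }
    let asbdPointer = CMAudioFormatDescriptionGetStreamBasicDescription(desc)
    guard asbdPointer != nil else { return }
    let asbd = asbdPointer.memory
    print("rate: \(asbd.mSampleRate), channels: \(asbd.mChannelsPerFrame), bits: \(asbd.mBitsPerChannel), flags: \(asbd.mFormatFlags)")
}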
When handleBuffer:buffer is called in the first snippet above, the buffer makes it through to self.delegate.handleBuffer(result). I can print(newBuffer) and see the converted buffer's memory location, but nothing comes out of the speakers. I can only imagine that something is inconsistent between the conversions to and from NSData. Any ideas? Thanks in advance.
Answer 0 (score: 2)
Why not use AVAudioPlayer all the way with the NSData format? If you really do need NSData, you can always load such data from the soundURL below. In this example, the disk buffer is something like:
let soundURL = documentDirectory.URLByAppendingPathComponent("sound.m4a")
It makes sense to record directly to a file anyway, for optimal memory and resource management. You can get NSData from your recording this way:
let data = NSFileManager.defaultManager().contentsAtPath(soundURL.path!)
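If you do keep the NSData route, AVAudioPlayer can also be initialized from that data directly rather than from a URL (a small sketch, with the same minimal error handling as the snippets below):

// Sketch: play straight from the NSData loaded above, skipping the file URL.
do {
    let player = try AVAudioPlayer(data: data!)
    player.play()
} catch {}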
The code below is all you need:
Recording
if !audioRecorder.recording {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setActive(true)
        audioRecorder.record()
    } catch {}
}
Playback
if !audioRecorder.recording {
    do {
        try audioPlayer = AVAudioPlayer(contentsOfURL: audioRecorder.url)
        audioPlayer.play()
    } catch {}
}
Setup
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
    try audioRecorder = AVAudioRecorder(URL: self.directoryURL()!,
        settings: recordSettings)
    audioRecorder.prepareToRecord()
} catch {}
Settings
let recordSettings = [AVSampleRateKey : NSNumber(float: Float(44100.0)),
                      AVFormatIDKey : NSNumber(int: Int32(kAudioFormatMPEG4AAC)),
                      AVNumberOfChannelsKey : NSNumber(int: 1),
                      AVEncoderAudioQualityKey : NSNumber(int: Int32(AVAudioQuality.Medium.rawValue))]
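The Setup snippet calls a directoryURL() helper that isn't shown here. One plausible shape for it, matching the sound.m4a file used earlier, might be the following; this is my guess, and the linked project below contains the authoritative version:

// Assumed implementation of the directoryURL() helper referenced in Setup.
func directoryURL() -> NSURL?
{
    let fileManager = NSFileManager.defaultManager()
    let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
    guard let documentDirectory = urls.first else { return nil }
    return documentDirectory.URLByAppendingPathComponent("sound.m4a")
}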
Download the Xcode project:
You can find this example here. Download the complete project, which records and plays back on both the simulator and a device, from Swift Recipes.