I am converting this code from Swift to Objective-C:
func toPCMBuffer(data: NSData) -> AVAudioPCMBuffer {
let audioFormat = AVAudioFormat(commonFormat: AVAudioCommonFormat.pcmFormatFloat32, sampleRate: 8000, channels: 1, interleaved: false) // given NSData audio format
let PCMBuffer = AVAudioPCMBuffer(pcmFormat: audioFormat, frameCapacity: UInt32(data.length) / audioFormat.streamDescription.pointee.mBytesPerFrame)
PCMBuffer.frameLength = PCMBuffer.frameCapacity
let channels = UnsafeBufferPointer(start: PCMBuffer.floatChannelData, count: Int(PCMBuffer.format.channelCount))
data.getBytes(UnsafeMutableRawPointer(channels[0]) , length: data.length)
return PCMBuffer
}
Here is my Objective-C version:
-(AVAudioPCMBuffer*)toPCMBuffer: (NSData*)data {
AVAudioFormat * audioFormat = [[AVAudioFormat alloc]initWithCommonFormat:AVAudioPCMFormatFloat32 sampleRate:8000 channels:1 interleaved:false];
AVAudioPCMBuffer* PCMBuffer = [[AVAudioPCMBuffer alloc]initWithPCMFormat:audioFormat frameCapacity:data.length/audioFormat.streamDescription->mBytesPerFrame];
PCMBuffer.frameLength = PCMBuffer.frameCapacity;
float *channels = malloc(PCMBuffer.format.channelCount * sizeof(float)); // remember to free eventually
memcpy(channels, PCMBuffer.floatChannelData, PCMBuffer.format.channelCount * sizeof(float));
[data getBytes:channels length:data.length];
return PCMBuffer;
}
But I'm not sure about this line:
AVAudioPCMBuffer* PCMBuffer = [[AVAudioPCMBuffer alloc]initWithPCMFormat:audioFormat frameCapacity:data.length/audioFormat.streamDescription->mBytesPerFrame];
I don't know how to write UInt32(data.length) in Objective-C, and I don't know how to translate UnsafeMutableRawPointer(channels[0]). Is this the right equivalent:
[data getBytes:channels length:data.length];
Answer (score: 0):
What the original code does is copy the parameter data into the memory that PCMBuffer.floatChannelData points to. What your code does is allocate a new heap buffer, copy a few bytes of PCMBuffer's channel-pointer array into it, and then copy the parameter data into that heap buffer, so the samples never reach PCMBuffer (and the malloc'd buffer leaks).
I think it should just be something like:
PCMBuffer.frameLength = PCMBuffer.frameCapacity;
[data getBytes:PCMBuffer.floatChannelData[0] length:data.length]; // floatChannelData[0] is the first channel's sample buffer
return PCMBuffer;
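
For completeness, here is a minimal sketch of the whole method with both of your questions addressed. This assumes mono, non-interleaved Float32 samples at 8 kHz, exactly matching the format the Swift code declares; the Swift UInt32(data.length) cast becomes an ordinary C cast to AVAudioFrameCount, and UnsafeMutableRawPointer(channels[0]) becomes floatChannelData[0]:

#import <AVFoundation/AVFoundation.h>

- (AVAudioPCMBuffer *)toPCMBuffer:(NSData *)data {
    // The format must match how the bytes in `data` were produced.
    AVAudioFormat *audioFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                                  sampleRate:8000
                                                                    channels:1
                                                                 interleaved:NO];
    // UInt32(data.length) in Swift is just an explicit cast in Objective-C;
    // AVAudioFrameCount is a typedef for uint32_t.
    AVAudioFrameCount capacity = (AVAudioFrameCount)(data.length / audioFormat.streamDescription->mBytesPerFrame);
    AVAudioPCMBuffer *pcmBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:audioFormat
                                                                frameCapacity:capacity];
    pcmBuffer.frameLength = pcmBuffer.frameCapacity;
    // floatChannelData is a pointer to an array of per-channel sample pointers;
    // floatChannelData[0] is the first (and here only) channel's buffer,
    // the equivalent of channels[0] in the Swift code.
    [data getBytes:pcmBuffer.floatChannelData[0] length:data.length];
    return pcmBuffer;
}

No malloc is needed: the AVAudioPCMBuffer already owns the sample memory, so the bytes are copied straight into it.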