I have an AudioTapProcessor attached to an AVPlayerItem. During processing it calls

```c
static void tap_ProcessCallback(MTAudioProcessingTapRef tap, CMItemCount numberFrames, MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut, CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut)
```

I need to convert the AudioBufferList into a CMSampleBuffer so that I can write it to a movie file with AVAssetWriterAudioInput.appendSampleBuffer.

So how do I convert an AudioBufferList into a CMSampleBuffer? I tried the code below, but CMSampleBufferSetDataBufferFromAudioBufferList fails with error -12731:

Error CMSampleBufferSetDataBufferFromAudioBufferList: Optional("-12731")
```swift
func processAudioData(audioData: UnsafeMutablePointer<AudioBufferList>, framesNumber: UInt32) {
    var sbuf: Unmanaged<CMSampleBuffer>?
    var status: OSStatus?
    var format: Unmanaged<CMFormatDescription>?

    // Hand-built ASBD -- the answer below points out that this is the problem.
    var formatId = UInt32(kAudioFormatLinearPCM)
    var formatFlags = UInt32(kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked)
    var audioFormat = AudioStreamBasicDescription(mSampleRate: 44100.00, mFormatID: formatId, mFormatFlags: formatFlags, mBytesPerPacket: 1, mFramesPerPacket: 1, mBytesPerFrame: 16, mChannelsPerFrame: 2, mBitsPerChannel: 2, mReserved: 0)

    status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, nil, 0, nil, nil, &format)
    if status != noErr {
        println("Error CMAudioFormatDescriptionCreate: \(status?.description)")
        return
    }

    var timing = CMSampleTimingInfo(duration: CMTimeMake(1, 44100), presentationTimeStamp: kCMTimeZero, decodeTimeStamp: kCMTimeInvalid)

    status = CMSampleBufferCreate(kCFAllocatorDefault, nil, Boolean(0), nil, nil, format?.takeRetainedValue(), CMItemCount(framesNumber), 1, &timing, 0, nil, &sbuf)
    if status != noErr {
        println("Error CMSampleBufferCreate: \(status?.description)")
        return
    }

    // This is the call that fails with -12731.
    status = CMSampleBufferSetDataBufferFromAudioBufferList(sbuf?.takeRetainedValue(), kCFAllocatorDefault, kCFAllocatorDefault, 0, audioData)
    if status != noErr {
        println("Error CMSampleBufferSetDataBufferFromAudioBufferList: \(status?.description)")
        return
    }

    var currentSampleTime = CMSampleBufferGetOutputPresentationTimeStamp(sbuf?.takeRetainedValue())
    println("audio buffer at time: \(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime))")

    if !assetWriterAudioInput!.readyForMoreMediaData {
        return
    } else if assetWriter.status == .Writing {
        if !assetWriterAudioInput!.appendSampleBuffer(sbuf?.takeRetainedValue()) {
            println("Problem appending audio buffer at time: \(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime))")
        }
    } else {
        println("assetWriterStatus: \(assetWriter.status.rawValue), Error: \(assetWriter.error.localizedDescription)")
        println("Could not write a frame")
    }
}
```
Answer 0 (score: 2)
OK, I've managed to solve this. The problem was that I should not have constructed the AudioStreamBasicDescription struct myself; instead, I should use the one supplied by the AudioProcessorTap's prepare callback:
```c
static void tap_PrepareCallback(MTAudioProcessingTapRef tap, CMItemCount maxFrames, const AudioStreamBasicDescription *processingFormat)
// retain this one (the processingFormat)
```
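To make that concrete, here is a minimal sketch in current Swift (the question's code predates Swift 2) of caching the format handed to the prepare callback and using it to build the CMSampleBuffer. The `TapContext` class, the storage wiring, and the `makeSampleBuffer` helper are illustrative assumptions, not from the original answer; the point is that the CMAudioFormatDescription comes from the tap's `processingFormat` rather than a hand-built ASBD.

```swift
import AVFoundation
import MediaToolbox

// Illustrative context object; assumed to be stored via the tap's
// clientInfo/tapStorageOut in the init callback (not shown here).
final class TapContext {
    var format: CMAudioFormatDescription?
}

// Prepare callback: build the format description from the ASBD the tap provides.
let tapPrepare: MTAudioProcessingTapPrepareCallback = { tap, _, processingFormat in
    let context = Unmanaged<TapContext>
        .fromOpaque(MTAudioProcessingTapGetStorage(tap))
        .takeUnretainedValue()
    var asbd = processingFormat.pointee
    var format: CMAudioFormatDescription?
    CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                   asbd: &asbd,
                                   layoutSize: 0, layout: nil,
                                   magicCookieSize: 0, magicCookie: nil,
                                   extensions: nil,
                                   formatDescriptionOut: &format)
    context.format = format
}

// Hypothetical helper for the process callback: wrap the tap's
// AudioBufferList in a CMSampleBuffer using the cached format.
func makeSampleBuffer(bufferList: UnsafeMutablePointer<AudioBufferList>,
                      frames: CMItemCount,
                      format: CMAudioFormatDescription,
                      presentationTime: CMTime) -> CMSampleBuffer? {
    // Derive the timescale from the actual stream format instead of hardcoding 44100.
    let sampleRate = CMAudioFormatDescriptionGetStreamBasicDescription(format)?.pointee.mSampleRate ?? 44100
    var timing = CMSampleTimingInfo(duration: CMTime(value: 1, timescale: CMTimeScale(sampleRate)),
                                    presentationTimeStamp: presentationTime,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    var status = CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                                      dataBuffer: nil, dataReady: false,
                                      makeDataReadyCallback: nil, refcon: nil,
                                      formatDescription: format,
                                      sampleCount: frames,
                                      sampleTimingEntryCount: 1, sampleTimingArray: &timing,
                                      sampleSizeEntryCount: 0, sampleSizeArray: nil,
                                      sampleBufferOut: &sampleBuffer)
    guard status == noErr, let buffer = sampleBuffer else { return nil }
    // With a format that matches the tap's actual audio, this no longer fails with -12731.
    status = CMSampleBufferSetDataBufferFromAudioBufferList(buffer,
                                                            blockBufferAllocator: kCFAllocatorDefault,
                                                            blockBufferMemoryAllocator: kCFAllocatorDefault,
                                                            flags: 0,
                                                            bufferList: bufferList)
    return status == noErr ? buffer : nil
}
```

The buffer returned by the helper can then go straight to AVAssetWriterAudioInput's append; the presentation timestamps still have to be tracked by the caller, e.g. by accumulating numberFrames across process callbacks.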