My code is based on the code from chapter 6 of Learning Core Audio.

This function takes an input file and copies the packets from that file into an output file. I'm using it to write the data from many input files into a single output file, and it works well.

The only problem I run into is when I add a new input file and set startPacketPosition to a region of the output file where packets have already been written: it replaces the old data with the new data.

Is there a way to write the new packets to the file without replacing the existing data? It would be like adding a sound effect to a song file without replacing any of the song data.

If this can't be done with AudioFileWritePackets, what is the best alternative?
static void writeInputFileToOutputFile(AudioStreamBasicDescription *format, ExtAudioFileRef *inputFile, AudioFileID *outputFile, UInt32 *startPacketPosition) {
    // determine the size of the output buffer
    UInt32 outputBufferSize = 32 * 1024; // 32 KB
    UInt32 sizePerPacket = format->mBytesPerPacket;
    UInt32 packetsPerBuffer = outputBufferSize / sizePerPacket;

    // allocate a buffer for receiving the data
    UInt8 *outputBuffer = (UInt8 *)malloc(sizeof(UInt8) * outputBufferSize);

    // read-convert-write loop
    while (1) {
        AudioBufferList convertedData; // create an audio buffer list
        convertedData.mNumberBuffers = 1; // with only one buffer
        // set the properties on the single buffer
        convertedData.mBuffers[0].mNumberChannels = format->mChannelsPerFrame;
        convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
        convertedData.mBuffers[0].mData = outputBuffer;

        // get the number of frames and buffer data from the input file
        UInt32 framesPerBuffer = packetsPerBuffer;
        CheckError(ExtAudioFileRead(*inputFile, &framesPerBuffer, &convertedData), "ExtAudioFileRead");

        // if the frame count is 0, we're finished
        if (framesPerBuffer == 0) {
            break; // break (not return) so the buffer below is freed
        }

        UInt32 bytes = format->mBytesPerPacket;
        CheckError(AudioFileWritePackets(*outputFile, false, framesPerBuffer * bytes, NULL, *startPacketPosition, &framesPerBuffer, convertedData.mBuffers[0].mData), "AudioFileWritePackets");

        // advance the output file packet position
        *startPacketPosition += framesPerBuffer;
    }
    free(outputBuffer);
}
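For context on why the overwrite happens: AudioFileWritePackets simply writes packets at the given packet index, and the Audio File API has no additive or mixing write. Mixing with plain Core Audio would therefore mean read-modify-write: read the existing packets back, sum the samples with the new ones, and write the result over the same region. Below is a minimal sketch of that idea, assuming 16-bit interleaved LPCM; the function name and the omission of error checking are illustrative, not part of the original code.

#import <AudioToolbox/AudioToolbox.h>

static void mixPacketsIntoFile(AudioFileID file, SInt16 *newSamples, UInt32 numPackets, UInt32 samplesPerPacket, SInt64 startPacket) {
    UInt32 numSamples = numPackets * samplesPerPacket;
    UInt32 numBytes = numSamples * (UInt32)sizeof(SInt16);
    // zero-filled, so any packets past end-of-file mix against silence
    SInt16 *existing = (SInt16 *)calloc(numSamples, sizeof(SInt16));
    UInt32 ioPackets = numPackets;
    // read whatever is already at the target packet position
    AudioFileReadPacketData(file, false, &numBytes, NULL, startPacket, &ioPackets, existing);
    // sum the two signals sample by sample, clamping to avoid wrap-around
    for (UInt32 i = 0; i < numSamples; i++) {
        SInt32 sum = (SInt32)existing[i] + (SInt32)newSamples[i];
        if (sum > INT16_MAX) sum = INT16_MAX;
        if (sum < INT16_MIN) sum = INT16_MIN;
        existing[i] = (SInt16)sum;
    }
    ioPackets = numPackets;
    // write the mixed packets back over the same region
    AudioFileWritePackets(file, false, numSamples * (UInt32)sizeof(SInt16), NULL, startPacket, &ioPackets, existing);
    free(existing);
}

In real code both calls should be wrapped in the same CheckError used above; that is left out here to keep the sketch short.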
Answer 0 (score: 0)
I found a way to achieve this using AVMutableComposition:
-(void)addInputAsset:(AVURLAsset *)input toOutputComposition:(AVMutableComposition *)composition atTime:(float)seconds {
    // add an audio track to the composition for this input
    AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    AVAssetTrack *clipAudioTrack = [[input tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

    // set the start time; a timescale of 44100 matches the audio
    // sample rate, so the insertion point is sample-accurate
    int sampleRate = 44100;
    CMTime nextClipStartTime = CMTimeMakeWithSeconds(seconds, sampleRate);
    NSError *error = nil;
    if (![compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, input.duration) ofTrack:clipAudioTrack atTime:nextClipStartTime error:&error]) {
        NSLog(@"insertTimeRange failed: %@", error);
    }
}
To mix two audio files with this method, you would call it like this:
-(void)mixAudio {
    // create a mutable composition
    AVMutableComposition *composition = [AVMutableComposition composition];

    // audio file 1 (local paths need file URLs, not URLWithString:)
    NSURL *url1 = [NSURL fileURLWithPath:@"path/to/file1.mp3"];
    AVURLAsset *asset1 = [[AVURLAsset alloc] initWithURL:url1 options:nil];
    [self addInputAsset:asset1 toOutputComposition:composition atTime:0.0];

    // audio file 2
    NSURL *url2 = [NSURL fileURLWithPath:@"path/to/file2.aif"];
    AVURLAsset *asset2 = [[AVURLAsset alloc] initWithURL:url2 options:nil];
    [self addInputAsset:asset2 toOutputComposition:composition atTime:0.2];

    // create the export session
    AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:composition presetName:AVAssetExportPresetAppleM4A];
    if (exportSession == nil) {
        // ERROR: abort
        return;
    }

    // configure export session output with all our parameters
    exportSession.outputURL = [NSURL fileURLWithPath:@"path/to/output_file.m4a"]; // output path
    exportSession.outputFileType = AVFileTypeAppleM4A; // output file type

    // export the file
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        if (AVAssetExportSessionStatusCompleted == exportSession.status) {
            NSLog(@"AVAssetExportSessionStatusCompleted");
        } else if (AVAssetExportSessionStatusFailed == exportSession.status) {
            NSLog(@"AVAssetExportSessionStatusFailed: %@", exportSession.error);
        } else {
            NSLog(@"Export Session Status: %ld", (long)exportSession.status);
        }
    }];
}
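One caveat worth noting (an addition to the original answer): AVAssetExportSession fails if a file already exists at its outputURL, so it can help to remove any stale output before exporting. A minimal sketch, reusing the placeholder path from above:

NSURL *outputURL = [NSURL fileURLWithPath:@"path/to/output_file.m4a"];
// the export session will not overwrite an existing file, so delete any old one first
[[NSFileManager defaultManager] removeItemAtURL:outputURL error:NULL];
exportSession.outputURL = outputURL;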