I have successfully recorded audio from the microphone into an audio file using Audio Units, with the help of openFrameworks and this site: http://atastypixel.com/blog/using-remoteio-audio-unit.
I want to be able to stream the file back to the Audio Unit and play the audio. According to {{3}}, I can use ExtAudioFileOpenURL and ExtAudioFileRead. But how do I play the audio data that ends up in my buffer?
This is what I have so far:
static OSStatus setupAudioFileRead() {
    // construct the file destination URL
    CFURLRef destinationURL = audioSystemFileURL();
    OSStatus status = ExtAudioFileOpenURL(destinationURL, &audioFileRef);
    CFRelease(destinationURL);
    if (checkStatus(status)) { ofLog(OF_LOG_ERROR, "ofxiPhoneSoundStream: Couldn't open file to read"); return status; }

    while (TRUE) {
        // Try to fill the buffer to capacity.
        UInt32 framesRead = 8000;
        status = ExtAudioFileRead(audioFileRef, &framesRead, &inputBufferList);
        // error check
        if (checkStatus(status)) { break; }
        // 0 frames read means EOF.
        if (framesRead == 0) { break; }
        // play audio???
    }
    return noErr;
}
Answer 0 (score: 0)
From the same author, http://atastypixel.com/blog/using-remoteio-audio-unit/: if you scroll down to the Playback section, try something like this:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    // Notes: ioData contains buffers (may be more than one!).
    // Fill them up as much as you can. Remember to set the size value in each
    // buffer to match how much data is in the buffer.
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        // Take a pointer (not a struct copy) so the mDataByteSize write
        // below actually lands in ioData.
        AudioBuffer *buffer = &ioData->mBuffers[i];

        // Copy from your own storage to the output buffer. yourBufferData and
        // yourBufferSize are placeholders for whatever you filled from the file.
        UInt32 size = MIN(buffer->mDataByteSize, yourBufferSize);
        memcpy(buffer->mData, yourBufferData, size);
        buffer->mDataByteSize = size; // indicate how much data we wrote in the buffer

        // To test if your Audio Unit setup is working, comment out the three
        // lines above and uncomment the loop below to hear random noise:
        /*
        UInt16 *frameBuffer = (UInt16 *)buffer->mData;
        for (UInt32 j = 0; j < inNumberFrames; j++) {
            frameBuffer[j] = rand();
        }
        */
    }
    return noErr;
}
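For the memcpy above to have something to copy, yourBufferData has to be filled somewhere. Here is a minimal sketch of one way to wire it to the question's ExtAudioFileRead loop (all of these names are assumptions, not from the original post): load the file's PCM data up front, then drain it with a read cursor.

static void  *gFileData;      // all of the file's PCM data, loaded once up front
static UInt32 gFileDataSize;  // total bytes loaded
static UInt32 gReadOffset;    // bytes already handed to the output

// Inside playbackCallback, per output buffer, in place of the memcpy above:
UInt32 remaining = gFileDataSize - gReadOffset;
UInt32 size = MIN(buffer->mDataByteSize, remaining);
memcpy(buffer->mData, (char *)gFileData + gReadOffset, size);
buffer->mDataByteSize = size;
gReadOffset += size;          // next callback continues where this one stopped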
If all you are looking for is recording from the mic to a file and playing it back, Apple's SpeakHere sample may be easier to work with.
Answer 1 (score: 0)
Basically:

1. Create a RemoteIO unit (see the references on how to create a RemoteIO unit);
2. Create a FilePlayer audio unit, a dedicated audio unit that reads an audio file and supplies the file's audio data to an output unit, e.g. the RemoteIO unit created in step 1. The FilePlayer needs quite a lot of setup before it can actually be used (specifying which file to play, which part of the file to play, and so on);
3. Set the kAudioUnitProperty_SetRenderCallback and kAudioUnitProperty_StreamFormat properties of the RemoteIO unit. The first property is essentially the callback function from which the RemoteIO unit pulls audio data and plays it. The second property must be set according to a StreamFormat the FilePlayer supports; it can be derived from a get-property call on the FilePlayer.
4. In the callback set in step 3, the most important thing is to ask the FilePlayer to render into the buffer the callback provides, which means calling AudioUnitRender() on the FilePlayer (see the sketch after this list);
5. Finally, start the RemoteIO unit to play the file.
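A minimal sketch of the callback in step 4, assuming steps 1 and 2 left you with an AudioUnit variable named filePlayerUnit (a name assumed here, not given in the answer):

static AudioUnit filePlayerUnit; // the FilePlayer created in step 2

static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {
    // The RemoteIO unit is pulling data; forward the pull to the FilePlayer,
    // which renders inNumberFrames of file audio straight into ioData.
    return AudioUnitRender(filePlayerUnit, ioActionFlags, inTimeStamp,
                           0,              // the FilePlayer's output bus
                           inNumberFrames, ioData);
}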
The above is just a rough outline of the basic work needed to play a file with Audio Units on iOS. For the details, see Learning Core Audio by Chris Adamson and Kevin Avila.
Answer 2 (score: 0)
Here is a relatively simple way to do it that reuses the Audio Unit from the Tasty Pixel blog mentioned above. In the recording callback, instead of filling the buffer with data from the microphone, you fill it with data from the file using ExtAudioFileRead. I will try to paste an example below. Note that this will only work with .caf files.
In your start method, call a readAudio or initAudioFile function that just sets up all the info about the file:
- (void)start {
    readAudio();
    OSStatus status = AudioOutputUnitStart(audioUnit);
    checkStatus(status);
}
Now, in the readAudio method, you initialize the audio file reference:
ExtAudioFileRef fileRef;

void readAudio() {
    NSString *name = @"AudioFile";
    NSString *source = [[NSBundle mainBundle] pathForResource:name ofType:@"caf"];
    const char *cString = [source cStringUsingEncoding:NSASCIIStringEncoding];
    CFStringRef str = CFStringCreateWithCString(NULL, cString, kCFStringEncodingMacRoman);
    CFURLRef inputFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, str, kCFURLPOSIXPathStyle, false);

    AudioFileID fileID;
    OSStatus err = AudioFileOpenURL(inputFileURL, kAudioFileReadPermission, 0, &fileID);
    CheckError(err, "AudioFileOpenURL");

    err = ExtAudioFileOpenURL(inputFileURL, &fileRef);
    CheckError(err, "ExtAudioFileOpenURL");

    // Tell the ExtAudioFile API what format we want the data read back in.
    err = ExtAudioFileSetProperty(fileRef,
                                  kExtAudioFileProperty_ClientDataFormat,
                                  sizeof(AudioStreamBasicDescription),
                                  &audioFormat);
    CheckError(err, "ExtAudioFileSetProperty");
}
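readAudio references an audioFormat variable; in the Tasty Pixel setup this is the client-side AudioStreamBasicDescription. As a sketch, matching the mono/16-bit comments in the callback below, it is assumed to look roughly like this (the 44.1 kHz rate is an assumption; use whatever your RemoteIO unit is configured with):

AudioStreamBasicDescription audioFormat = {0};
audioFormat.mSampleRate       = 44100.0; // assumed sample rate
audioFormat.mFormatID         = kAudioFormatLinearPCM;
audioFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mChannelsPerFrame = 1;  // mono: only one buffer needed
audioFormat.mBitsPerChannel   = 16; // samples are 16 bits = 2 bytes
audioFormat.mBytesPerFrame    = 2;  // 1 frame holds exactly 1 sample
audioFormat.mFramesPerPacket  = 1;
audioFormat.mBytesPerPacket   = 2;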
Now that you have the audio data at hand, the next step is pretty easy. In the recordingCallback, read the data from the file instead of from the mic:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    // Because of the way our audio format (set up above) is chosen:
    // we only need 1 buffer, since it is mono.
    // Samples are 16 bits = 2 bytes.
    // 1 frame includes only 1 sample.
    AudioBuffer buffer;
    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc(inNumberFrames * 2);

    // Put the buffer in an AudioBufferList.
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // Obtain the samples: read from the file instead of the mic.
    OSStatus err = ExtAudioFileRead(fileRef, &inNumberFrames, &bufferList);
    // Now the samples we just read are sitting in the buffers in bufferList.

    // Process the new data.
    [iosAudio processAudio:&bufferList];

    // Release the malloc'ed data in the buffer we created earlier.
    free(bufferList.mBuffers[0].mData);
    return noErr;
}
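One detail the snippet above skips: at end of file, ExtAudioFileRead returns 0 frames, and the callback would keep feeding silence. A small sketch of how that could be handled right after the read (the loop-back behavior is an assumption, not part of the original answer):

// After the ExtAudioFileRead call in recordingCallback:
if (err == noErr && inNumberFrames == 0) {
    // End of file. Either stop the unit...
    // AudioOutputUnitStop(audioUnit);
    // ...or seek back to frame 0 to loop the file:
    ExtAudioFileSeek(fileRef, 0);
}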
This worked for me.