How do I write audio captured from the microphone to a local file on the iPhone using AudioBuffer?

Asked: 2013-11-18 09:01:11

Tags: ios iphone audio audiobuffer

I'm new to the audio frameworks. Can someone help me write the audio that is captured from the microphone and played back to a file?

Below is the code that plays the microphone input through the iPhone speaker. Now I want to save that audio to a file on the iPhone for future use.

I found the code for capturing the microphone here: http://www.stefanpopp.de/2011/capture-iphone-microphone/

Code for recording the audio:
/**

Code starts here for playing the recorded voice

*/

static OSStatus playbackCallback(void *inRefCon, 
                                 AudioUnitRenderActionFlags *ioActionFlags, 
                                 const AudioTimeStamp *inTimeStamp, 
                                 UInt32 inBusNumber, 
                                 UInt32 inNumberFrames, 
                                 AudioBufferList *ioData) {    

    /**
     This is the reference to the object that owns the callback.
     */
    AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;

    // iterate over the incoming stream and copy to the output stream
    for (int i=0; i < ioData->mNumberBuffers; i++) { 
        AudioBuffer buffer = ioData->mBuffers[i];

        // find minimum size
        UInt32 size = min(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

        // copy buffer to audio buffer which gets played after function return
        memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);

        // set data size
        buffer.mDataByteSize = size; 

        // get a copy of the recorder struct from the AudioProcessor property
        Recorder recInfo = audioProcessor.audioRecorder;

        // write the bytes
        OSStatus audioErr = noErr;
        if (recInfo.running) {
            audioErr = AudioFileWriteBytes(recInfo.recordFile,
                                           false,
                                           recInfo.inStartingByte,
                                           &size,
                                           buffer.mData);
            assert(audioErr == noErr);
            // increment our byte count
            recInfo.inStartingByte += (SInt64)size; // size should be number of bytes
            audioProcessor.audioRecorder = recInfo;
        }
    }

    return noErr;
}

- (void)prepareAudioFileToRecord {

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;

    NSTimeInterval time = ([[NSDate date] timeIntervalSince1970]); // returned as a double
    long digits = (long)time; // this is the first 10 digits
    int decimalDigits = (int)(fmod(time, 1) * 1000); // this will get the 3 missing digits
    //    long timestamp = (digits * 1000) + decimalDigits;
    NSString *timeStampValue = [NSString stringWithFormat:@"%ld", digits];
    //    NSString *timeStampValue = [NSString stringWithFormat:@"%ld.%d", digits, decimalDigits];

    NSString *fileName = [NSString stringWithFormat:@"test%@.caf", timeStampValue];
    NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];

    // modify the ASBD (see EDIT: towards the end of this post!)
    audioFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;

    // set up the file (bridge cast will differ if using ARC)
    OSStatus audioErr = noErr;
    audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
                                      kAudioFileCAFType,
                                      &audioFormat,
                                      kAudioFileFlags_EraseFile,
                                      &audioRecorder.recordFile);
    assert(audioErr == noErr); // simple error checking
    audioRecorder.inStartingByte = 0;
    audioRecorder.running = true;
    self.audioRecorder = audioRecorder;
}

Thanks in advance, Bala

2 Answers:

Answer 0 (score: 8):

To write the bytes from the AudioBuffer to a local file we need help from Audio File Services, which is included in the AudioToolbox framework.

Conceptually we will do the following: set up an audio file and keep a reference to it (we need that reference to be accessible from the render callback you included in your post). We also need to keep track of the number of bytes written each time the callback fires. Finally, a flag lets us know when to stop writing to the file and close it.

Because the code at the link you provided declares an AudioStreamBasicDescription that is LPCM, and therefore has a constant bit rate, we can use the AudioFileWriteBytes function (writing compressed audio is more involved and would use the AudioFileWritePackets function instead).
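
For reference, here is a minimal sketch of the kind of LPCM ASBD this assumes; the exact sample rate, channel count and variable name come from the linked sample and may differ in your project:

// Illustrative only -- a constant-bit-rate LPCM format that AudioFileWriteBytes can handle.
// The values (44.1 kHz, 16-bit, mono) are assumptions; use whatever your AudioProcessor sets up.
AudioStreamBasicDescription recordFormat = {0};
recordFormat.mSampleRate       = 44100.0;
recordFormat.mFormatID         = kAudioFormatLinearPCM;
recordFormat.mFormatFlags      = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
recordFormat.mChannelsPerFrame = 1;
recordFormat.mBitsPerChannel   = 16;
recordFormat.mBytesPerFrame    = (recordFormat.mBitsPerChannel / 8) * recordFormat.mChannelsPerFrame;
recordFormat.mFramesPerPacket  = 1;
recordFormat.mBytesPerPacket   = recordFormat.mBytesPerFrame * recordFormat.mFramesPerPacket;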

Let's start by declaring a custom struct (which holds all the extra data we need), adding an instance variable of that struct, and creating a property that points to the struct variable. We add this to the AudioProcessor custom class, since you already have access to that object from the callback, where you typecast it in this line:

AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;

Add this to AudioProcessor.h (above the @interface):

typedef struct Recorder {
    AudioFileID recordFile;
    SInt64 inStartingByte;
    Boolean running;
} Recorder;

Now let's add the instance variable and a pointer property that we assign to it (so we can access it from within the callback function). In the @interface add an instance variable named audioRecorder, and also make the ASBD available to the class:

Recorder audioRecorder;
AudioStreamBasicDescription recordFormat;// assign this ivar to where the asbd is created in the class

In the -(void)initializeAudio method, comment out or delete this line, since we have made recordFormat an ivar:

//AudioStreamBasicDescription recordFormat;

Now add the kAudioFormatFlagIsBigEndian format flag where the ASBD is set up:

// also modify the ASBD in the AudioProcessor classes -(void)initializeAudio method (see EDIT: towards the end of this post!)
    recordFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;

Finally, add the property that points to the audioRecorder instance variable, and don't forget to synthesize it in AudioProcessor.m. We'll name the pointer property audioRecorderPointer:

@property Recorder *audioRecorderPointer;

// in .m synthesise the property
@synthesize audioRecorderPointer;

Now let's assign the pointer to the ivar (this could go in the -(void)initializeAudio method of the AudioProcessor class):

// ASSIGN POINTER PROPERTY TO IVAR
self.audioRecorderPointer = &audioRecorder;

Now in AudioProcessor.m let's add a method to set up the file and open it for writing. This should be called before you start running the AUGraph.

-(void)prepareAudioFileToRecord {
    // lets set up a test file in the documents directory
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
    NSString *fileName = @"test_recording.aif";
    NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];

    // set up the file (bridge cast will differ if using ARC)
    OSStatus audioErr = noErr;
    audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
                                      kAudioFileAIFFType,
                                      &recordFormat,
                                      kAudioFileFlags_EraseFile,
                                      &audioRecorder.recordFile);
    assert(audioErr == noErr); // simple error checking
    audioRecorder.inStartingByte = 0;
    audioRecorder.running = true;
}
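
For example, a hypothetical call site could look like the following; [audioProcessor start] stands in for whatever method the linked AudioProcessor sample uses to start the audio unit, so adjust the names to your copy:

// Assumed call order: open the file first, then start the audio unit / AUGraph.
AudioProcessor *audioProcessor = [[AudioProcessor alloc] init];
[audioProcessor prepareAudioFileToRecord];
[audioProcessor start]; // hypothetical start method from the linked sample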

OK, we're nearly there. We now have a file to write to and an AudioFileID that can be accessed from the render callback. So inside the callback function you posted, add the following right before you return noErr at the end of the method:

// get a pointer to the recorder struct instance variable
Recorder *recInfo = audioProcessor.audioRecorderPointer;
// write the bytes
OSStatus audioErr = noErr;
if (recInfo->running) {
    audioErr = AudioFileWriteBytes(recInfo->recordFile,
                                   false,
                                   recInfo->inStartingByte,
                                   &size,
                                   buffer.mData);
    assert(audioErr == noErr);
    // increment our byte count
    recInfo->inStartingByte += (SInt64)size; // size should be number of bytes
}

When we want to stop recording (probably triggered by some user action), set the running boolean to false and close the file somewhere in the AudioProcessor class:

audioRecorder.running = false;
OSStatus audioErr = AudioFileClose(audioRecorder.recordFile);
assert (audioErr == noErr);
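
For instance, one way to wire this up (the -stopRecording wrapper and the button action are hypothetical names, not part of the linked sample):

// In AudioProcessor.m -- a small wrapper around the two lines above (name is illustrative).
- (void)stopRecording {
    audioRecorder.running = false;
    OSStatus audioErr = AudioFileClose(audioRecorder.recordFile);
    assert(audioErr == noErr);
}

// In a view controller -- hypothetical user action that stops the recording.
- (IBAction)stopButtonTapped:(id)sender {
    [self.audioProcessor stopRecording];
}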

EDIT: The byte ordering of the samples needs to be big-endian for the file, so add the kAudioFormatFlagIsBigEndian bitmask flag to the ASBD in the source code found at the link in question.

For extra info on this topic the Apple documentation is a great resource, and I'd also recommend reading Learning Core Audio by Chris Adamson and Kevin Avila (I own a copy).

Answer 1 (score: 1):

Use Audio Queue Services.

There is an example in the Apple documentation that does exactly what you are asking:

Audio Queue Services Programming Guide - Recording Audio
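
For reference, a heavily condensed sketch of the recording flow that guide walks through; all names (RecorderState, HandleInputBuffer), the 16-bit mono LPCM format and the buffer sizes are illustrative assumptions, and error handling is omitted:

#import <AudioToolbox/AudioToolbox.h>

// Illustrative state struct, mirroring the recorder state struct used in the guide.
typedef struct {
    AudioStreamBasicDescription dataFormat;
    AudioQueueRef               queue;
    AudioFileID                 file;
    SInt64                      currentPacket;
    bool                        isRunning;
} RecorderState;

// Input callback: the queue hands us a filled buffer; write it to the file and re-enqueue it.
static void HandleInputBuffer(void *inUserData,
                              AudioQueueRef inAQ,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc) {
    RecorderState *state = (RecorderState *)inUserData;
    if (inNumPackets > 0) {
        AudioFileWritePackets(state->file, false, inBuffer->mAudioDataByteSize,
                              inPacketDesc, state->currentPacket, &inNumPackets,
                              inBuffer->mAudioData);
        state->currentPacket += inNumPackets;
    }
    if (state->isRunning) {
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    }
}

// Set up and start the queue (16-bit mono LPCM at 44.1 kHz is an assumption).
void StartRecording(RecorderState *state, CFURLRef fileURL) {
    state->dataFormat.mSampleRate       = 44100.0;
    state->dataFormat.mFormatID         = kAudioFormatLinearPCM;
    state->dataFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    state->dataFormat.mChannelsPerFrame = 1;
    state->dataFormat.mBitsPerChannel   = 16;
    state->dataFormat.mBytesPerFrame    = 2;
    state->dataFormat.mFramesPerPacket  = 1;
    state->dataFormat.mBytesPerPacket   = 2;

    AudioQueueNewInput(&state->dataFormat, HandleInputBuffer, state,
                       NULL, kCFRunLoopCommonModes, 0, &state->queue);
    AudioFileCreateWithURL(fileURL, kAudioFileCAFType, &state->dataFormat,
                           kAudioFileFlags_EraseFile, &state->file);

    // Allocate and enqueue a few buffers (the size here is arbitrary for the sketch).
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(state->queue, 32768, &buffer);
        AudioQueueEnqueueBuffer(state->queue, buffer, 0, NULL);
    }

    state->currentPacket = 0;
    state->isRunning = true;
    AudioQueueStart(state->queue, NULL);
}

// Stopping: stop and dispose of the queue, then close the file.
void StopRecording(RecorderState *state) {
    state->isRunning = false;
    AudioQueueStop(state->queue, true);
    AudioQueueDispose(state->queue, true);
    AudioFileClose(state->file);
}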