Audio Unit recording noise

Date: 2012-07-18 09:17:52

Tags: ios core-audio audiounit

I've been struggling with this since yesterday, and any help would be greatly appreciated.

I have a Multichannel Mixer audio unit, and the callback assigned to each channel fills the requested audio buffer when it is called. I am trying to record from those same callbacks by writing the data to a file.

If I don't call AudioUnitRender, the recorded audio is just noise; if I do call it, I get two errors: error 10877 and error 50 (presumably kAudioUnitErr_InvalidElement, -10877, and paramErr, -50).

The recording code in the callback looks like this:

if (recordingOn) 
{
    // Allocate space for the whole list, not a single AudioBuffer
    AudioBufferList *bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList));

    // Two interleaved channels of 16-bit samples
    SInt16 samples[inNumberFrames * 2];
    memset(samples, 0, sizeof(samples));

    bufferList->mNumberBuffers = 1;
    bufferList->mBuffers[0].mData = samples;
    bufferList->mBuffers[0].mNumberChannels = 2;
    bufferList->mBuffers[0].mDataByteSize = inNumberFrames * 2 * sizeof(SInt16);

    OSStatus status;
    status = AudioUnitRender(audioObject.mixerUnit,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             bufferList);

    if (noErr != status) {
        printf("AudioUnitRender error: %ld", (long)status);
        free(bufferList);
        return noErr;
    }

    ExtAudioFileWriteAsync(audioObject.recordingFile, inNumberFrames, bufferList);
    free(bufferList);
}

Is it correct to write the data in each channel's callback, or should I connect the mixer to a Remote I/O unit instead?

I'm using LPCM, and this is the ASBD for the recording file (.caf):

recordingFormat.mFormatID = kAudioFormatLinearPCM;
recordingFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger |
                               kAudioFormatFlagIsBigEndian |
                               kAudioFormatFlagIsPacked;
recordingFormat.mSampleRate = 44100;
recordingFormat.mChannelsPerFrame = 2;
recordingFormat.mFramesPerPacket = 1;
recordingFormat.mBytesPerPacket = recordingFormat.mChannelsPerFrame * sizeof (SInt16);
recordingFormat.mBytesPerFrame = recordingFormat.mChannelsPerFrame * sizeof (SInt16);
recordingFormat.mBitsPerChannel = 16;
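
For context, a minimal sketch of how an ASBD like this is typically handed to ExtAudioFile when the .caf file is created (an assumption about the setup, not code from the question; fileURL is a placeholder CFURLRef):

ExtAudioFileRef recordingFile = NULL;
OSStatus err = ExtAudioFileCreateWithURL(fileURL,                    // hypothetical destination URL
                                         kAudioFileCAFType,          // .caf container
                                         &recordingFormat,           // the ASBD above = file data format
                                         NULL,                       // default channel layout
                                         kAudioFileFlags_EraseFile,
                                         &recordingFile);

ExtAudioFile converts between this file format and whatever client format is set with kExtAudioFileProperty_ClientDataFormat; if no client format is set, it assumes the buffers passed to ExtAudioFileWriteAsync already match the file's ASBD.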

I'm not sure what I'm doing wrong.

How does stereo affect the way the recorded data needs to be handled before it is written to the file?

2 Answers:

Answer 0 (score: 2)

There are a couple of problems. If you are trying to record the final "mix", you can add a callback on the I/O unit with AudioUnitAddRenderNotify(iounit, callback, file). That callback then only has to take ioData and pass it to ExtAudioFileWriteAsync(...), so you don't need to create any buffers at all.

Side note: allocating memory in the render thread is bad. You should avoid all system calls inside a render callback; there is no guarantee they will complete within the audio thread's very tight deadline. That is exactly why ExtAudioFileWriteAsync exists: it takes this into account and does the disk write on another thread.
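
A minimal sketch of that approach (ioUnit and recordingFile are placeholder names for the remote I/O unit and the ExtAudioFileRef, not names from the question):

// Render-notify callbacks fire once before and once after each render
// cycle; write only the post-render data, which holds the finished mix.
static OSStatus renderNotify(void                       *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp       *inTimeStamp,
                             UInt32                      inBusNumber,
                             UInt32                      inNumberFrames,
                             AudioBufferList            *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        // ioData already contains the mixed output; no extra buffer needed
        ExtAudioFileWriteAsync((ExtAudioFileRef)inRefCon, inNumberFrames, ioData);
    }
    return noErr;
}

// Registration, e.g. once the graph is set up:
// AudioUnitAddRenderNotify(ioUnit, renderNotify, recordingFile);

Per the ExtAudioFileWriteAsync documentation, the first call should be made with 0 frames and a NULL buffer before rendering starts, so the asynchronous machinery can allocate its internal buffers outside the render thread.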

Answer 1 (score: 0)

I found some demo code that may be useful for you:

Demo URL: https://github.com/JNYJdev/AudioUnit

OR

博客:http://atastypixel.com/blog/using-remoteio-audio-unit/

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    // Because of the way our audio format (setup below) is chosen:
    // we only need 1 buffer, since it is mono
    // Samples are 16 bits = 2 bytes.
    // 1 frame includes only 1 sample

    AudioBuffer buffer;

    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc( inNumberFrames * 2 );

    // Put buffer in a AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // Then:
    // Obtain recorded samples

    OSStatus status;

    status = AudioUnitRender([iosAudio audioUnit],
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList);
    checkStatus(status);

    // Now, we have the samples we just read sitting in buffers in bufferList
    // Process the new data
    [iosAudio processAudio:&bufferList];

    // release the malloc'ed data in the buffer we created earlier
    free(bufferList.mBuffers[0].mData);

    return noErr;
}
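
(For reference: iosAudio and checkStatus are names from the linked demo project, not Core Audio API; iosAudio is the demo's audio controller object and checkStatus is its OSStatus error-checking helper.)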