AudioUnit input missing periodic samples

Asked: 2014-04-30 00:27:13

Tags: ios objective-c core-audio audiounit

I have implemented an AUGraph containing a single AudioUnit that handles IO from the microphone and the headphones. The problem I am having is that chunks of the audio input go missing.

I believe the samples are being lost during the hardware-to-software buffer exchange. I tried slowing the iPhone's sample rate down from 44.1 kHz to 20 kHz to see whether that would recover the missing data, but it did not produce the output I expected.

The AUGraph is set up as follows:

// Audio component description
AudioComponentDescription desc;
bzero(&desc, sizeof(AudioComponentDescription));
desc.componentType          = kAudioUnitType_Output;
desc.componentSubType       = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer  = kAudioUnitManufacturer_Apple;
desc.componentFlags         = 0;
desc.componentFlagsMask     = 0;

// Stereo ASBD
AudioStreamBasicDescription stereoStreamFormat;
bzero(&stereoStreamFormat, sizeof(AudioStreamBasicDescription));
stereoStreamFormat.mSampleRate          = kSampleRate;
stereoStreamFormat.mFormatID            = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags         = kAudioFormatFlagsCanonical;
stereoStreamFormat.mBytesPerPacket      = 4;
stereoStreamFormat.mBytesPerFrame       = 4;
stereoStreamFormat.mFramesPerPacket     = 1;
stereoStreamFormat.mChannelsPerFrame    = 2;
stereoStreamFormat.mBitsPerChannel      = 16;

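For an interleaved 16-bit stereo PCM format like the one above, the byte-count fields must be mutually consistent: mBytesPerFrame equals mChannelsPerFrame times mBitsPerChannel / 8, and with one frame per packet, mBytesPerPacket equals mBytesPerFrame. A quick standalone check of the values used above (illustrative helpers, not part of the original code):

```c
// Byte-count consistency for an interleaved linear PCM ASBD
// (illustrative helpers, not part of the original code).
static unsigned pcmBytesPerFrame(unsigned channelsPerFrame,
                                 unsigned bitsPerChannel) {
    return channelsPerFrame * (bitsPerChannel / 8);
}

static unsigned pcmBytesPerPacket(unsigned bytesPerFrame,
                                  unsigned framesPerPacket) {
    return bytesPerFrame * framesPerPacket;
}
```

For the ASBD above: two 16-bit channels give 4 bytes per frame, and one frame per packet gives 4 bytes per packet, matching the values assigned.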
OSErr err = noErr;
@try {
    // Create new AUGraph
    err = NewAUGraph(&auGraph);
    NSAssert1(err == noErr, @"Error creating AUGraph: %hd", err);

    // Add node to AUGraph
    err = AUGraphAddNode(auGraph,
                         &desc,
                         &ioNode);
    NSAssert1(err == noErr, @"Error adding AUNode: %hd", err);

    // Open AUGraph
    err = AUGraphOpen(auGraph);
    NSAssert1(err == noErr, @"Error opening AUGraph: %hd", err);

    // Add AUGraph node info
    err = AUGraphNodeInfo(auGraph,
                          ioNode,
                          &desc,
                          &_ioUnit);
    NSAssert1(err == noErr, @"Error adding node info to AUGraph: %hd", err);

    // Enable input, which is disabled by default.
    UInt32 enabled = 1;
    err = AudioUnitSetProperty(_ioUnit,
                         kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input,
                         kInputBus,
                         &enabled,
                         sizeof(enabled));
    NSAssert1(err == noErr, @"Error enabling input: %hd", err);

    // Apply format to input of ioUnit
    err = AudioUnitSetProperty(_ioUnit,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input,
                         kOutputBus,
                         &stereoStreamFormat,
                         sizeof(stereoStreamFormat));
    NSAssert1(err == noErr, @"Error setting input ASBD: %hd", err);

    // Apply format to output of ioUnit
    err = AudioUnitSetProperty(_ioUnit,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output,
                         kInputBus,
                         &stereoStreamFormat,
                         sizeof(stereoStreamFormat));
    NSAssert1(err == noErr, @"Error setting output ASBD: %hd", err);

    // Set hardware IO callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = hardwareIOCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)(self);
    err = AUGraphSetNodeInputCallback(auGraph,
                                      ioNode,
                                      kOutputBus,
                                      &callbackStruct);
    NSAssert1(err == noErr, @"Error setting IO callback: %hd", err);

    // Initialize AudioGraph
    err = AUGraphInitialize(auGraph);
    NSAssert1(err == noErr, @"Error initializing AUGraph: %hd", err);

    // Start audio unit
    err = AUGraphStart(auGraph);
    NSAssert1(err == noErr, @"Error starting AUGraph: %hd", err);

}
@catch (NSException *exception) {
    NSLog(@"Failed with exception: %@", exception);
}

where kOutputBus is defined as 0, kInputBus as 1, and kSampleRate as 44100.
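The constant definitions are not shown in the question; presumably they look something like this (a sketch based only on the values stated above):

```c
// Assumed definitions, based on the values stated in the question
#define kOutputBus   0      // RemoteIO element 0: output to speaker/headphones
#define kInputBus    1      // RemoteIO element 1: input from the microphone
#define kSampleRate  44100  // hardware sample rate in Hz
```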

IO Callback

static OSStatus hardwareIOCallback(void                         *inRefCon,
                                   AudioUnitRenderActionFlags   *ioActionFlags,
                                   const AudioTimeStamp         *inTimeStamp,
                                   UInt32                       inBusNumber,
                                   UInt32                       inNumberFrames,
                                   AudioBufferList              *ioData) {
    // Scope reference to GSFSensorIOController class
    GSFSensorIOController *sensorIO = (__bridge GSFSensorIOController *) inRefCon;

    // Grab the samples and place them in the buffer list
    AudioUnit ioUnit = sensorIO.ioUnit;

    OSStatus result = AudioUnitRender(ioUnit,
                                      ioActionFlags,
                                      inTimeStamp,
                                      kInputBus,
                                      inNumberFrames,
                                      ioData);

    if (result != noErr) NSLog(@"Blowing it in interrupt");

    // Process input data
    [sensorIO processIO:ioData];

    // Set up power tone attributes
    float freq = 20000.00f;
    float sampleRate = kSampleRate;
    float phase = sensorIO.sinPhase;
    float sinSignal;

    double phaseInc = 2 * M_PI * freq / sampleRate;

    // Write to output buffers
    for(size_t i = 0; i < ioData->mNumberBuffers; ++i) {
        AudioBuffer buffer = ioData->mBuffers[i];
        for(size_t sampleIdx = 0; sampleIdx < inNumberFrames; ++sampleIdx) {
            // Grab sample buffer
            SInt16 *sampleBuffer = buffer.mData;

            // Generate power tone on left channel
            sinSignal = sin(phase);
            sampleBuffer[2 * sampleIdx] = (SInt16)((sinSignal * 32767.0f) / 2);

            // Write commands to the micro on the right channel as necessary
            if (sensorIO.newDataOut)
                sampleBuffer[2 * sampleIdx + 1] = (SInt16)((sinSignal * 32767.0f) / 2);
            else
                sampleBuffer[2 * sampleIdx + 1] = 0;

            phase += phaseInc;
            if (phase >= 2 * M_PI) {
                phase -= 2 * M_PI;
            }
        }
    }

    // Store sine wave phase for next callback
    sensorIO.sinPhase = phase;

    return result;
}

The processIO function, called inside hardwareIOCallback, is used to process the input and build the response for the output. For debugging purposes I am just pushing every sample of the input buffer onto an NSMutableArray.

Process IO

- (void) processIO: (AudioBufferList*) bufferList {
    for (int j = 0 ; j < bufferList->mNumberBuffers ; j++) {
        AudioBuffer sourceBuffer = bufferList->mBuffers[j];
        SInt16 *buffer = (SInt16 *) bufferList->mBuffers[j].mData;

        for (int i = 0; i < (sourceBuffer.mDataByteSize / sizeof(SInt16)); i++) {
            // DEBUG: Array of raw data points for printing to a file
            [self.rawInputData addObject:[NSNumber numberWithInt:buffer[i]]];
        }
    }
}

After stopping the AUGraph I write the contents of this input buffer, i.e. all the samples held in the array rawInputData, to a file. I then open this file in MATLAB and plot it. Here I can see that the audio input is missing data (circled in red in the figure below).

(Image: "Missing Data" — plot of the captured input, with the missing chunks circled in red)

I am out of ideas on how to fix this and could really use help understanding and resolving the problem.

1 Answer

Answer 1 (score: 1):

Your callback may be too slow. It is generally not recommended to use any Objective-C methods inside an Audio Unit render callback (such as adding to a mutable array, or anything else that could allocate memory).
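A common way to follow that advice is to do no Objective-C work in the render callback at all: copy the incoming samples into a preallocated lock-free ring buffer inside the callback, and drain it from a normal thread afterwards. A minimal single-producer/single-consumer sketch (the type and function names are illustrative, not part of the poster's code):

```c
#include <stdatomic.h>
#include <stddef.h>

#define RING_CAPACITY 16384  // samples; must be a power of two

typedef struct {
    short buffer[RING_CAPACITY];
    atomic_size_t head;  // advanced by the audio callback (producer)
    atomic_size_t tail;  // advanced by the consumer thread
} SampleRing;

// Producer side: safe to call from the render callback -- no locks,
// no allocation, no Objective-C. Drops samples if the ring is full.
static size_t ringWrite(SampleRing *r, const short *src, size_t n) {
    size_t head  = atomic_load(&r->head);
    size_t tail  = atomic_load(&r->tail);
    size_t space = RING_CAPACITY - (head - tail);
    if (n > space) n = space;
    for (size_t i = 0; i < n; ++i)
        r->buffer[(head + i) & (RING_CAPACITY - 1)] = src[i];
    atomic_store(&r->head, head + n);
    return n;
}

// Consumer side: drain samples on a normal thread, and only then do
// Objective-C work such as appending to rawInputData or writing a file.
static size_t ringRead(SampleRing *r, short *dst, size_t n) {
    size_t head  = atomic_load(&r->head);
    size_t tail  = atomic_load(&r->tail);
    size_t avail = head - tail;
    if (n > avail) n = avail;
    for (size_t i = 0; i < n; ++i)
        dst[i] = r->buffer[(tail + i) & (RING_CAPACITY - 1)];
    atomic_store(&r->tail, tail + n);
    return n;
}
```

With this structure, processIO would call ringWrite instead of touching the NSMutableArray, and the debug array would be filled by a timer or background thread calling ringRead.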