iOS app crashes when headphones are plugged in or unplugged

Asked: 2013-05-07 07:50:47

Tags: ios audiounit headphones

I am running a SIP audio-streaming app on iOS 6.1.3, on an iPad 2 and a new iPad.

I start my app on the iPad (nothing plugged in) and audio works.
I plug in the headphones and the app crashes: malloc: error for object 0x....: pointer being freed was not allocated, or EXC_BAD_ACCESS.

Alternatively:

I start my app on the iPad (headphones already plugged in) and audio comes from the headphones.
I unplug the headphones and the app crashes: malloc: error for object 0x....: pointer being freed was not allocated, or EXC_BAD_ACCESS.

The app code uses the AudioUnit API and is based on the sample code at http://code.google.com/p/ios-coreaudio-example/ (see below).

I use the kAudioSessionProperty_AudioRouteChange callback to be made aware of route changes, so the OS sound manager drives three callbacks (their registrations are condensed in the sketch right after this list):
1) one that processes the recorded microphone samples,
2) one that supplies samples to the speaker, and
3) one that signals a change in the audio hardware.
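
Condensed from the init method in the full listing below, the three registrations look like this:

// 3) be notified when the audio hardware / route changes
AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange,
    propListener, self);

// 1) microphone samples arrive through the input callback ...
AURenderCallbackStruct cb = { recordingCallback, self };
AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_SetInputCallback,
    kAudioUnitScope_Global, kInputBus, &cb, sizeof(cb));

// 2) ... and the speaker pulls samples through the render callback
cb.inputProc = playbackCallback;
AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback,
    kAudioUnitScope_Global, kOutputBus, &cb, sizeof(cb));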

After a lot of testing, my feeling is that the tricky code is the part that performs the microphone capture. After a plug/unplug action, the recording callback is usually invoked a few more times before the RouteChange callback fires; this later leads to a "segmentation fault", and the RouteChange callback is never called. More specifically, I think the AudioUnitRender function causes the "bad memory access" without raising any exception at all.

My feeling is that the non-atomic recording-callback code races against the OS's updates of its sound-device structures: the less atomic the recording callback is, the more likely the OS hardware update and the recording callback run concurrently.

I modified my code to make the recording callback as thin as possible, but my feeling is that the heavy processing load from my app's other threads feeds the race described above; that would explain why malloc/free errors surface in other parts of the code after AudioUnitRender's bad access.
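
To make the idea of a "thin" callback concrete, here is a sketch of the variant I am converging on: it renders into storage allocated once instead of calling malloc/free on every invocation. This is only a sketch; kMaxFrames is a hypothetical upper bound that must cover every inNumberFrames the unit can deliver.

// Sketch of a recording callback that never touches the heap.
// kMaxFrames is a hypothetical upper bound on inNumberFrames.
enum { kMaxFrames = 4096 };
static SInt16 scratchSamples[kMaxFrames]; // 16-bit mono scratch storage

static OSStatus thinRecordingCallback(void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData) {

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * 2;
    bufferList.mBuffers[0].mData = scratchSamples;

    OSStatus status = AudioUnitRender([iosAudio audioUnit], ioActionFlags,
        inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
    if (status != noErr) return status;

    [iosAudio processAudio:&bufferList];
    return noErr;
}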

I tried to reduce the recording-callback latency with:

UInt32 numFrames = 256;
UInt32 dataSize = sizeof(numFrames);

AudioUnitSetProperty(audioUnit,
    kAudioUnitProperty_MaximumFramesPerSlice,
    kAudioUnitScope_Global,
    0,
    &numFrames,
    dataSize);
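
(For completeness, reading the property back confirms whether the unit accepted the request; this check is illustrative rather than part of my original code:)

// illustrative sanity check: read the property back after setting it
UInt32 actualFrames = 0;
UInt32 propSize = sizeof(actualFrames);
OSStatus mfpsStatus = AudioUnitGetProperty(audioUnit,
    kAudioUnitProperty_MaximumFramesPerSlice,
    kAudioUnitScope_Global,
    0,
    &actualFrames,
    &propSize);
if (mfpsStatus == noErr) {
    // shows whether the 256-frame request above actually took effect
    printf("MaximumFramesPerSlice is now %u\n", (unsigned)actualFrames);
}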

I also tried wrapping the offending code in:

dispatch_async(dispatch_get_main_queue(), ^{
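
(The snippet above is truncated; spelled out, the wrapper looked roughly like this, where handleRouteChange is a placeholder name for the code I moved off the audio thread:)

dispatch_async(dispatch_get_main_queue(), ^{
    // placeholder for the route-change handling moved off the audio thread
    [iosAudio handleRouteChange];
});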

Does anyone have a hint or a solution? To reproduce the error, here is my audio-session code:

//
//  IosAudioController.m
//  Aruts
//
//  Created by Simon Epskamp on 10/11/10.
//  Copyright 2010 __MyCompanyName__. All rights reserved.
//

#import "IosAudioController.h"
#import <AudioToolbox/AudioToolbox.h>

#define kOutputBus 0
#define kInputBus 1

IosAudioController* iosAudio;

void checkStatus(int status) {
    if (status) {
        printf("Status not 0! %d\n", status);
        // exit(1);
    }
}

/**
 * This callback is called when new audio data from the microphone is available.
 */
static OSStatus recordingCallback(void *inRefCon, 
    AudioUnitRenderActionFlags *ioActionFlags, 
    const AudioTimeStamp *inTimeStamp, 
    UInt32 inBusNumber, 
    UInt32 inNumberFrames, 
    AudioBufferList *ioData) {

    // Because of the way our audio format (setup below) is chosen:
    // we only need 1 buffer, since it is mono
    // Samples are 16 bits = 2 bytes.
    // 1 frame includes only 1 sample

    AudioBuffer buffer;

    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc( inNumberFrames * 2 );

    // Put buffer in a AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    NSLog(@"Recording Callback 1 0x%x ? 0x%x",buffer.mData, 
        bufferList.mBuffers[0].mData);

    // Then:
    // Obtain recorded samples

    OSStatus status;
    status = AudioUnitRender([iosAudio audioUnit],
        ioActionFlags, 
        inTimeStamp,
        inBusNumber,
        inNumberFrames,
        &bufferList);
    checkStatus(status);

    // Now, we have the samples we just read sitting in buffers in bufferList
    // Process the new data
    [iosAudio processAudio:&bufferList];

    NSLog(@"Recording Callback 2 0x%x ? 0x%x",buffer.mData, 
        bufferList.mBuffers[0].mData);

    // release the malloc'ed data in the buffer we created earlier
    free(bufferList.mBuffers[0].mData);

    return noErr;
}

/**
 * This callback is called when the audioUnit needs new data to play through the
 * speakers. If you don't have any, just don't write anything in the buffers
 */
static OSStatus playbackCallback(void *inRefCon, 
    AudioUnitRenderActionFlags *ioActionFlags, 
    const AudioTimeStamp *inTimeStamp, 
    UInt32 inBusNumber, 
    UInt32 inNumberFrames, 
    AudioBufferList *ioData) {
        // Notes: ioData contains buffers (may be more than one!)
        // Fill them up as much as you can.
        // Remember to set the size value in each 
        // buffer to match how much data is in the buffer.

    for (int i=0; i < ioData->mNumberBuffers; i++) {
        // in practice we will only ever have 1 buffer, since audio format is mono
        AudioBuffer buffer = ioData->mBuffers[i];

        // NSLog(@"  Buffer %d has %d channels and wants %d bytes of data.", i, 
            buffer.mNumberChannels, buffer.mDataByteSize);

        // copy temporary buffer data to output buffer
        UInt32 size = MIN(buffer.mDataByteSize,
            [iosAudio tempBuffer].mDataByteSize);

        // don't copy more data than we have, or than fits
        memcpy(buffer.mData, [iosAudio tempBuffer].mData, size);
        // indicate how much data we wrote in the buffer
        buffer.mDataByteSize = size;

        // uncomment to hear random noise
        /*
         * UInt16 *frameBuffer = buffer.mData;
         * for (int j = 0; j < inNumberFrames; j++) {
         *     frameBuffer[j] = rand();
         * }
         */
    }

    return noErr;
}

@implementation IosAudioController
@synthesize audioUnit, tempBuffer;

void propListener(void *inClientData,
    AudioSessionPropertyID inID,
    UInt32 inDataSize,
    const void *inData) {

    if (inID == kAudioSessionProperty_AudioRouteChange) {

        CFStringRef newRoute;
        UInt32 size = sizeof(CFStringRef);

        AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &newRoute);

        if (newRoute) {
            CFIndex length = CFStringGetLength(newRoute);
            CFIndex maxSize = CFStringGetMaximumSizeForEncoding(length,
                kCFStringEncodingUTF8);

            char *buffer = (char *)malloc(maxSize);
            CFStringGetCString(newRoute, buffer, maxSize,
                kCFStringEncodingUTF8);

            //CFShow(newRoute);
            printf("New route is %s\n",buffer);

            if (CFStringCompare(newRoute, CFSTR("HeadsetInOut"), 0) ==
                kCFCompareEqualTo) // headset plugged in
            {
                printf("Headset\n");
            } else {
                printf("Another device\n");

                UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
                AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                    sizeof (audioRouteOverride),&audioRouteOverride);
            }
            printf("New route is %s\n",buffer);
            free(buffer);
            // we own the CFStringRef returned by AudioSessionGetProperty
            CFRelease(newRoute);
        }
    }
}

/**
 * Initialize the audioUnit and allocate our own temporary buffer.
 * The temporary buffer will hold the latest data coming in from the microphone,
 * and will be copied to the output when this is requested.
 */
- (id) init {
    self = [super init];
    OSStatus status;

    // Initialize and configure the audio session
    AudioSessionInitialize(NULL, NULL, NULL, self);

    UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, 
        sizeof(audioCategory), &audioCategory);
    AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, 
        propListener, self);

    Float32 preferredBufferSize = .020;
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, 
        sizeof(preferredBufferSize), &preferredBufferSize);

    AudioSessionSetActive(true);

    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = 
        kAudioUnitSubType_VoiceProcessingIO/*kAudioUnitSubType_RemoteIO*/;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    checkStatus(status);

    // Enable IO for recording
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
        kAudioOutputUnitProperty_EnableIO, 
        kAudioUnitScope_Input, 
        kInputBus,
        &flag, 
        sizeof(flag));
    checkStatus(status);

    // Enable IO for playback
    flag = 1;
    status = AudioUnitSetProperty(audioUnit, 
        kAudioOutputUnitProperty_EnableIO, 
        kAudioUnitScope_Output, 
        kOutputBus,
        &flag, 
        sizeof(flag));

    checkStatus(status);

    // Describe format
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 8000.00;
    //audioFormat.mSampleRate = 44100.00;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = 
        kAudioFormatFlagsCanonical/* kAudioFormatFlagIsSignedInteger | 
        kAudioFormatFlagIsPacked*/;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;

    // Apply format
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_StreamFormat, 
        kAudioUnitScope_Output, 
        kInputBus, 
        &audioFormat, 
        sizeof(audioFormat));

    checkStatus(status);
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_StreamFormat, 
        kAudioUnitScope_Input, 
        kOutputBus, 
        &audioFormat, 
        sizeof(audioFormat));

    checkStatus(status);


    // Set input callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
        kAudioOutputUnitProperty_SetInputCallback,
        kAudioUnitScope_Global, 
        kInputBus, 
        &callbackStruct, 
        sizeof(callbackStruct));

    checkStatus(status);
    // Set output callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
        kAudioUnitProperty_SetRenderCallback, 
        kAudioUnitScope_Global, 
        kOutputBus,
        &callbackStruct, 
        sizeof(callbackStruct));

    checkStatus(status);

    // Disable buffer allocation for the recorder (optional - do this if we want to 
    // pass in our own)

    flag = 0;
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_ShouldAllocateBuffer,
        kAudioUnitScope_Output, 
        kInputBus,
        &flag, 
        sizeof(flag)); 


    flag = 0;
    status = AudioUnitSetProperty(audioUnit,
        kAudioUnitProperty_ShouldAllocateBuffer,
        kAudioUnitScope_Output,
        kOutputBus,
        &flag,
        sizeof(flag));

    // Allocate our own buffers (1 channel, 16 bits per sample, thus 16 bits per 
    // frame, thus 2 bytes per frame).
    // In practice the buffers used contain 512 frames;
    // if this changes, processAudio will fix up the size.
    tempBuffer.mNumberChannels = 1;
    tempBuffer.mDataByteSize = 512 * 2;
    tempBuffer.mData = malloc( 512 * 2 );

    // Initialise
    status = AudioUnitInitialize(audioUnit);
    checkStatus(status);

    return self;
}

/**
 * Start the audioUnit. This means data will be provided from
 * the microphone, and requested for feeding to the speakers, by
 * use of the provided callbacks.
 */
- (void) start {
    OSStatus status = AudioOutputUnitStart(audioUnit);
    checkStatus(status);
}

/**
 * Stop the audioUnit
 */
- (void) stop {
    OSStatus status = AudioOutputUnitStop(audioUnit);
    checkStatus(status);
}

/**
 * Change this function to decide what is done with incoming
 * audio data from the microphone.
 * Right now we copy it to our own temporary buffer.
 */
- (void) processAudio: (AudioBufferList*) bufferList {
    AudioBuffer sourceBuffer = bufferList->mBuffers[0];

    // fix tempBuffer size if it's the wrong size
    if (tempBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
        free(tempBuffer.mData);
        tempBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
        tempBuffer.mData = malloc(sourceBuffer.mDataByteSize);
    }

    // copy incoming audio data to temporary buffer
    memcpy(tempBuffer.mData, bufferList->mBuffers[0].mData, 
        bufferList->mBuffers[0].mDataByteSize);
    usleep(1000000); // <- TO REPRODUCE THE ERROR, CONCURRENCY MORE LIKELY

}

/**
 * Clean up.
 */
- (void) dealloc {
    AudioUnitUninitialize(audioUnit);
    free(tempBuffer.mData);
    // under manual reference counting, [super dealloc] must come last
    [super dealloc];
}

@end

1 Answer:

Answer 0 (score: 8):

In my tests, the line that ultimately triggers the SEGV is:

AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
    sizeof(audioRouteOverride), &audioRouteOverride);

Changing the properties of an AudioUnit chain in flight is always tricky, but if you stop the AudioUnit before the rerouting and start it again afterwards, it finishes using all the buffers it has stored up and then carries on with the new parameters.

Is that acceptable, or do you need a smaller gap between the route change and the resumption of the recording?

What I did was:

void propListener(void *inClientData,
    AudioSessionPropertyID inID,
    UInt32 inDataSize,
    const void *inData) {

    [iosAudio stop];
    // ...

    [iosAudio start];
}
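
For reference, a fleshed-out version of that listener could look like the sketch below. Re-applying the speaker override while the unit is stopped is my assumption about the right ordering, not something the tests above pin down:

void propListener(void *inClientData,
    AudioSessionPropertyID inID,
    UInt32 inDataSize,
    const void *inData) {

    if (inID != kAudioSessionProperty_AudioRouteChange) return;

    // stop rendering before touching any routing property ...
    [iosAudio stop];

    // ... change the route while the unit is idle (assumed placement) ...
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
        sizeof(audioRouteOverride), &audioRouteOverride);

    // ... and restart so the unit picks up the new parameters
    [iosAudio start];
}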

My iPhone 5 no longer crashes (your mileage may vary with different hardware).

The most logical explanation I have for this behavior, somewhat backed by these tests, is that the render pipeline is asynchronous. If you take forever to manipulate the buffers, they simply get queued. But if you change the AudioUnit's settings, you trigger a mass reset of the render queue with unknown side effects. The trouble is that these changes are synchronous, so they retroactively affect all the asynchronous calls patiently waiting for their turn.

If you don't care about the missed samples, you can do something like this:

static BOOL isStopped = NO;
static OSStatus recordingCallback(void *inRefCon, //...
{
  if(isStopped) {
    NSLog(@"Stopped, ignoring");
    return noErr;
  }
  // ...
}

static OSStatus playbackCallback(void *inRefCon, //...
{
  if(isStopped) {
    NSLog(@"Stopped, ignoring");
    return noErr;
  }
  // ...
}

// ...

/**
 * Start the audioUnit. This means data will be provided from
 * the microphone, and requested for feeding to the speakers, by
 * use of the provided callbacks.
 */
- (void) start {
    OSStatus status = AudioOutputUnitStart(_audioUnit);
    checkStatus(status);

    isStopped = NO;
}

/**
 * Stop the audioUnit
 */
- (void) stop {

    isStopped = YES;

    OSStatus status = AudioOutputUnitStop(_audioUnit);
    checkStatus(status);
}
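
One caveat from me rather than from the tests above: isStopped is written on the main thread and read on the audio threads, so declaring it volatile (or using an atomic primitive) keeps the compiler from caching a stale value in the callbacks:

// hardened variant of the flag: volatile forces a fresh read in each callback
static volatile BOOL isStopped = NO;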

// ...