Simple audio recorder on Mac OS X

Date: 2011-11-12 00:59:08

Tags: xcode macos cocoa core-audio avaudiorecorder

Does anyone have sample code for a SIMPLE audio recorder on Mac OS X? I just want to record sound from the MacBook Pro's built-in microphone and save it to a file. That's all.

I have been searching for hours, and yes, there are examples that record voice and save it to a file, such as http://developer.apple.com/library/mac/#samplecode/MYRecorder/Introduction/Intro.html. But the sample code for Mac OS X seems ten times more complex than the comparable sample code for the iPhone.

For iOS, the calls are simple:

soundFile = [NSURL fileURLWithPath:[tempDir stringByAppendingString:@"mysound.cap"]];
soundSetting = [NSDictionary dictionaryWithObjectsAndKeys: // dictionary setting code left out goes here
soundRecorder = [[AVAudioRecorder alloc] initWithURL:soundFile settings:soundSetting error:nil];
[soundRecorder record];
[soundRecorder stop];

I would think the Mac OS X code could be as simple as the iPhone version. Thanks for your help.

Here is the code (currently the player does not work):

#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>

@interface MyAVFoundationClass : NSObject <AVAudioPlayerDelegate>
{
    AVAudioRecorder *soundRecorder;

}

@property (retain) AVAudioRecorder *soundRecorder;

-(IBAction)stopAudio:(id)sender;
-(IBAction)recordAudio:(id)sender;
-(IBAction)playAudio:(id)sender;

@end


#import "MyAVFoundationClass.h"

@implementation MyAVFoundationClass

@synthesize soundRecorder;

-(void)awakeFromNib
{
    NSLog(@"awakeFromNib visited");
    NSString *tempDir;
    NSURL *soundFile;
    NSDictionary *soundSetting;

    tempDir = @"/Users/broncotrojan/Documents/testvoices/";
    soundFile = [NSURL fileURLWithPath: [tempDir stringByAppendingString:@"test1.caf"]];    
    NSLog(@"soundFile: %@",soundFile);

    soundSetting = [NSDictionary dictionaryWithObjectsAndKeys:
                    [NSNumber numberWithFloat: 44100.0],AVSampleRateKey,
                    [NSNumber numberWithInt: kAudioFormatMPEG4AAC],AVFormatIDKey,
                    [NSNumber numberWithInt: 2],AVNumberOfChannelsKey,
                    [NSNumber numberWithInt: AVAudioQualityHigh],AVEncoderAudioQualityKey, nil];

    soundRecorder = [[AVAudioRecorder alloc] initWithURL: soundFile settings: soundSetting error: nil];
}

-(IBAction)stopAudio:(id)sender
{
    NSLog(@"stopAudioVisited");
    [soundRecorder stop];
}

-(IBAction)recordAudio:(id)sender
{
    NSLog(@"recordAudio Visited");
    [soundRecorder record];

}

-(IBAction)playAudio:(id)sender
{
    NSLog(@"playAudio Visited");
    NSURL *soundFile;
    NSString *tempDir;
    AVAudioPlayer *audioPlayer;

    tempDir = @"/Users/broncotrojan/Documents/testvoices/";
    soundFile = [NSURL fileURLWithPath: [tempDir stringByAppendingString:@"test1.caf"]];  
    NSLog(@"soundFile: %@", soundFile);

    audioPlayer =  [[AVAudioPlayer alloc] initWithContentsOfURL:soundFile error:nil];

    [audioPlayer setDelegate:self];
    [audioPlayer play];

}

@end

4 Answers:

Answer 0 (score: 4)

The AVFoundation framework is new in Lion and very similar to its iOS counterpart. That includes AVAudioRecorder. You can use your iOS code with little or no modification.

The documentation is here.

Answer 1 (score: 0)

The reason your code does not play the audio is that the audioPlayer variable is released as soon as execution reaches the end of the method block.

So move the following variable out of the method block (for example, into an instance variable), and it will play the audio fine.

 AVAudioPlayer *audioPlayer; 
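
A minimal sketch of that fix, reusing the class from the question under manual reference counting (the property name `audioPlayer` is my own choice, not from the original code):

```objc
#import <AVFoundation/AVFoundation.h>

// Interface: hold a retained reference so the player outlives the
// -playAudio: call. Class and delegate protocol match the question;
// the audioPlayer property name is hypothetical.
@interface MyAVFoundationClass : NSObject <AVAudioPlayerDelegate>
@property (retain) AVAudioPlayer *audioPlayer;
- (IBAction)playAudio:(id)sender;
@end

@implementation MyAVFoundationClass
@synthesize audioPlayer;

- (IBAction)playAudio:(id)sender
{
    NSURL *soundFile = [NSURL fileURLWithPath:
        [@"/Users/broncotrojan/Documents/testvoices/" stringByAppendingString:@"test1.caf"]];

    // Assign to the retained property instead of a local variable,
    // so the player is not deallocated before playback finishes.
    self.audioPlayer = [[[AVAudioPlayer alloc] initWithContentsOfURL:soundFile
                                                               error:nil] autorelease];
    [self.audioPlayer setDelegate:self];
    [self.audioPlayer play];
}
@end
```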

By the way, your code snippet was very helpful to me! :D

Answer 2 (score: 0)

Here is the code for the Mac:

NSDictionary *soundSetting = [NSDictionary dictionaryWithObjectsAndKeys:
                              [NSNumber numberWithFloat: 44100.0], AVSampleRateKey,
                              [NSNumber numberWithInt: kAudioFormatMPEG4AAC], AVFormatIDKey,
                              [NSNumber numberWithInt: 2], AVNumberOfChannelsKey,
                              [NSNumber numberWithInt: AVAudioQualityHigh], AVEncoderAudioQualityKey, nil];

NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
// Use an extension that matches the AAC format; AAC data cannot be written into a .wav container.
NSURL *audioFileURL = [NSURL fileURLWithPath:[documentsDirectory stringByAppendingPathComponent:@"test.m4a"]];

NSError *error = nil;
AVAudioRecorder *soundRecorder = [[AVAudioRecorder alloc] initWithURL:audioFileURL settings:soundSetting error:&error];

if (soundRecorder == nil)
{
    NSLog(@"Error! soundRecorder initialization failed: %@", error);
}

// start recording
[soundRecorder record];

Answer 3 (score: 0)

Here is the code that worked for me on macOS 10.14 with Xcode 10.2.1 and Swift 5.0.1.

First, you have to set the Privacy - Microphone Usage Description (NSMicrophoneUsageDescription) key in your Info.plist file, as described in Apple's documentation: Requesting Authorization for Media Capture on macOS.
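
For reference, the raw Info.plist entry looks like this (the key name is Apple's; the usage string itself is just an example):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app records audio from the built-in microphone.</string>
```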

Then you have to request the user's permission to use the microphone:

switch AVCaptureDevice.authorizationStatus(for: .audio) {
case .authorized: // The user has previously granted access to the microphone.
  () // proceed with recording

case .notDetermined: // The user has not yet been asked for microphone access.
  AVCaptureDevice.requestAccess(for: .audio) { granted in
    if granted {
      // proceed with recording
    }
  }

case .denied: // The user has previously denied access.
  ()

case .restricted: // The user can't grant access due to restrictions.
  ()

@unknown default:
  fatalError()
}

Then you can start and stop audio recording with the following:

import AVFoundation

open class SpeechRecorder: NSObject {
  private var destinationUrl: URL!

  var recorder: AVAudioRecorder?
  let player = AVQueuePlayer()

  open func start() {
    destinationUrl = createUniqueOutputURL()

    do {
      let format = AVAudioFormat(settings: [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVEncoderAudioQualityKey: AVAudioQuality.high,
        AVSampleRateKey: 44100.0,
        AVNumberOfChannelsKey: 1,
        AVLinearPCMBitDepthKey: 16,
        ])!
      let recorder = try AVAudioRecorder(url: destinationUrl, format: format)

      // workaround against Swift, AVAudioRecorder: Error 317: ca_debug_string: inPropertyData == NULL issue 
      // https://stackoverflow.com/a/57670740/598057
      let firstSuccess = recorder.record()
      if firstSuccess == false || recorder.isRecording == false {
        recorder.record()
      }
      assert(recorder.isRecording)

      self.recorder = recorder
    } catch let error {
      let code = (error as NSError).code
      NSLog("SpeechRecorder: \(error)")
      NSLog("SpeechRecorder: \(code)")

      let osCode = OSStatus(code)

      NSLog("SpeechRecorder: \(String(describing: osCode.detailedErrorMessage()))")
    }
  }

  open func stop() {
    NSLog("SpeechRecorder: stop()")

    if let recorder = recorder {
      recorder.stop()
      NSLog("SpeechRecorder: final file \(destinationUrl.absoluteString)")

      player.removeAllItems()
      player.insert(AVPlayerItem(url: destinationUrl), after: nil)
      player.play()
    }
  }

  func createUniqueOutputURL() -> URL {
    // Write into the temporary directory; use FileManager.default.urls(for:in:)
    // instead if a permanent location is wanted.
    let documentsDirectory = URL(fileURLWithPath: NSTemporaryDirectory())

    let currentTime = Int(Date().timeIntervalSince1970 * 1000)

    let outputURL = URL(fileURLWithPath: "SpeechRecorder-\(currentTime).m4a",
      relativeTo: documentsDirectory)

    destinationUrl = outputURL

    return outputURL
  }
}

extension OSStatus {
  //**************************
  func asString() -> String? {
    let n = UInt32(bitPattern: self.littleEndian)
    guard let n1 = UnicodeScalar((n >> 24) & 255), n1.isASCII else { return nil }
    guard let n2 = UnicodeScalar((n >> 16) & 255), n2.isASCII else { return nil }
    guard let n3 = UnicodeScalar((n >>  8) & 255), n3.isASCII else { return nil }
    guard let n4 = UnicodeScalar( n        & 255), n4.isASCII else { return nil }
    return String(n1) + String(n2) + String(n3) + String(n4)
  } // asString

  //**************************
  func detailedErrorMessage() -> String {
    switch(self) {
    case 0:
      return "Success"

    // AVAudioRecorder errors
    case kAudioFileUnspecifiedError:
      return "kAudioFileUnspecifiedError"

    case kAudioFileUnsupportedFileTypeError:
      return "kAudioFileUnsupportedFileTypeError"

    case kAudioFileUnsupportedDataFormatError:
      return "kAudioFileUnsupportedDataFormatError"

    case kAudioFileUnsupportedPropertyError:
      return "kAudioFileUnsupportedPropertyError"

    case kAudioFileBadPropertySizeError:
      return "kAudioFileBadPropertySizeError"

    case kAudioFilePermissionsError:
      return "kAudioFilePermissionsError"

    case kAudioFileNotOptimizedError:
      return "kAudioFileNotOptimizedError"

    case kAudioFileInvalidChunkError:
      return "kAudioFileInvalidChunkError"

    case kAudioFileDoesNotAllow64BitDataSizeError:
      return "kAudioFileDoesNotAllow64BitDataSizeError"

    case kAudioFileInvalidPacketOffsetError:
      return "kAudioFileInvalidPacketOffsetError"

    case kAudioFileInvalidFileError:
      return "kAudioFileInvalidFileError"

    case kAudioFileOperationNotSupportedError:
      return "kAudioFileOperationNotSupportedError"

    case kAudioFileNotOpenError:
      return "kAudioFileNotOpenError"

    case kAudioFileEndOfFileError:
      return "kAudioFileEndOfFileError"

    case kAudioFilePositionError:
      return "kAudioFilePositionError"

    case kAudioFileFileNotFoundError:
      return "kAudioFileFileNotFoundError"

    //***** AUGraph errors
    case kAUGraphErr_NodeNotFound:             return "AUGraph Node Not Found"
    case kAUGraphErr_InvalidConnection:        return "AUGraph Invalid Connection"
    case kAUGraphErr_OutputNodeErr:            return "AUGraph Output Node Error"
    case kAUGraphErr_CannotDoInCurrentContext: return "AUGraph Cannot Do In Current Context"
    case kAUGraphErr_InvalidAudioUnit:         return "AUGraph Invalid Audio Unit"

    //***** MIDI errors
    case kMIDIInvalidClient:     return "MIDI Invalid Client"
    case kMIDIInvalidPort:       return "MIDI Invalid Port"
    case kMIDIWrongEndpointType: return "MIDI Wrong Endpoint Type"
    case kMIDINoConnection:      return "MIDI No Connection"
    case kMIDIUnknownEndpoint:   return "MIDI Unknown Endpoint"
    case kMIDIUnknownProperty:   return "MIDI Unknown Property"
    case kMIDIWrongPropertyType: return "MIDI Wrong Property Type"
    case kMIDINoCurrentSetup:    return "MIDI No Current Setup"
    case kMIDIMessageSendErr:    return "MIDI Message Send Error"
    case kMIDIServerStartErr:    return "MIDI Server Start Error"
    case kMIDISetupFormatErr:    return "MIDI Setup Format Error"
    case kMIDIWrongThread:       return "MIDI Wrong Thread"
    case kMIDIObjectNotFound:    return "MIDI Object Not Found"
    case kMIDIIDNotUnique:       return "MIDI ID Not Unique"
    case kMIDINotPermitted:      return "MIDI Not Permitted"

    //***** AudioToolbox errors
    case kAudioToolboxErr_CannotDoInCurrentContext: return "AudioToolbox Cannot Do In Current Context"
    case kAudioToolboxErr_EndOfTrack:               return "AudioToolbox End Of Track"
    case kAudioToolboxErr_IllegalTrackDestination:  return "AudioToolbox Illegal Track Destination"
    case kAudioToolboxErr_InvalidEventType:         return "AudioToolbox Invalid Event Type"
    case kAudioToolboxErr_InvalidPlayerState:       return "AudioToolbox Invalid Player State"
    case kAudioToolboxErr_InvalidSequenceType:      return "AudioToolbox Invalid Sequence Type"
    case kAudioToolboxErr_NoSequence:               return "AudioToolbox No Sequence"
    case kAudioToolboxErr_StartOfTrack:             return "AudioToolbox Start Of Track"
    case kAudioToolboxErr_TrackIndexError:          return "AudioToolbox Track Index Error"
    case kAudioToolboxErr_TrackNotFound:            return "AudioToolbox Track Not Found"
    case kAudioToolboxError_NoTrackDestination:     return "AudioToolbox No Track Destination"

    //***** AudioUnit errors
    case kAudioUnitErr_CannotDoInCurrentContext: return "AudioUnit Cannot Do In Current Context"
    case kAudioUnitErr_FailedInitialization:     return "AudioUnit Failed Initialization"
    case kAudioUnitErr_FileNotSpecified:         return "AudioUnit File Not Specified"
    case kAudioUnitErr_FormatNotSupported:       return "AudioUnit Format Not Supported"
    case kAudioUnitErr_IllegalInstrument:        return "AudioUnit Illegal Instrument"
    case kAudioUnitErr_Initialized:              return "AudioUnit Initialized"
    case kAudioUnitErr_InvalidElement:           return "AudioUnit Invalid Element"
    case kAudioUnitErr_InvalidFile:              return "AudioUnit Invalid File"
    case kAudioUnitErr_InvalidOfflineRender:     return "AudioUnit Invalid Offline Render"
    case kAudioUnitErr_InvalidParameter:         return "AudioUnit Invalid Parameter"
    case kAudioUnitErr_InvalidProperty:          return "AudioUnit Invalid Property"
    case kAudioUnitErr_InvalidPropertyValue:     return "AudioUnit Invalid Property Value"
    case kAudioUnitErr_InvalidScope:             return "AudioUnit Invalid Scope"
    case kAudioUnitErr_InstrumentTypeNotFound:   return "AudioUnit Instrument Type Not Found"
    case kAudioUnitErr_NoConnection:             return "AudioUnit No Connection"
    case kAudioUnitErr_PropertyNotInUse:         return "AudioUnit Property Not In Use"
    case kAudioUnitErr_PropertyNotWritable:      return "AudioUnit Property Not Writable"
    case kAudioUnitErr_TooManyFramesToProcess:   return "AudioUnit Too Many Frames To Process"
    case kAudioUnitErr_Unauthorized:             return "AudioUnit Unauthorized"
    case kAudioUnitErr_Uninitialized:            return "AudioUnit Uninitialized"
    case kAudioUnitErr_UnknownFileType:          return "AudioUnit Unknown File Type"
    case kAudioUnitErr_RenderTimeout:            return "AudioUnit Render Timeout"

    //***** Audio errors
    case kAudio_BadFilePathError:      return "Audio Bad File Path Error"
    case kAudio_FileNotFoundError:     return "Audio File Not Found Error"
    case kAudio_FilePermissionError:   return "Audio File Permission Error"
    case kAudio_MemFullError:          return "Audio Mem Full Error"
    case kAudio_ParamError:            return "Audio Param Error"
    case kAudio_TooManyFilesOpenError: return "Audio Too Many Files Open Error"
    case kAudio_UnimplementedError:    return "Audio Unimplemented Error"

    default: return "Unknown error (no description)"
    }
  }
}

The workaround for the inPropertyData == NULL issue is adapted from Swift, AVAudioRecorder: Error 317: ca_debug_string: inPropertyData == NULL.

The code that turns OSStatus codes into string messages is adapted from here: How do you convert an iPhone OSStatus code to something useful?