Passing a sound (wav) file from Objective-C to JavaScript

Date: 2013-03-28 19:21:57

Tags: javascript objective-c audio ios6

I am recording a sound file (wav format) in Objective-C. I want to pass it back to JavaScript using Objective-C's stringByEvaluatingJavaScriptFromString. I think I will have to convert the wav file to a base64 string to pass it to this function, then convert the base64 string back to (wav/blob) format in JavaScript and feed it to an audio tag for playback. I don't know how to do this, and I am also not sure whether this is the best way to pass the wave file back to JavaScript. Any ideas would be appreciated.
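(For illustration, here is a minimal sketch of the base64 route described above. It assumes iOS 7's base64EncodedStringWithOptions: (on iOS 6 a third-party NSData base64 category would be needed) and a hypothetical playWav function defined on the JavaScript side; webView and soundFilePath are placeholders.)

// Sketch only: encode the recorded wav file and hand it to a hypothetical
// playWav(base64) function in the web view.
NSData *wavData = [NSData dataWithContentsOfFile:soundFilePath];
NSString *base64 = [wavData base64EncodedStringWithOptions:0]; // iOS 7+
NSString *js = [NSString stringWithFormat:@"playWav('%@');", base64];
[webView stringByEvaluatingJavaScriptFromString:js];

// On the JavaScript side, playWav(base64) could simply set a data URI:
//   audioElement.src = 'data:audio/wav;base64,' + base64;
//   audioElement.play();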

2 Answers:

Answer 0 (score: 2)

Well, this was not as straightforward as I had expected, so here is how I achieved it.

Step 1: I record the audio in caf format using AVAudioRecorder.

NSArray *dirPaths;
NSString *docsDir;

dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);

docsDir = [dirPaths objectAtIndex:0];

soundFilePath = [docsDir stringByAppendingPathComponent:@"sound.caf"];

NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];

NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:AVAudioQualityMin], AVEncoderAudioQualityKey,
    [NSNumber numberWithInt:16], AVEncoderBitRateKey,
    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
    nil];

NSError *error = nil;

audioRecorder = [[AVAudioRecorder alloc]
                 initWithURL:soundFileURL
                 settings:recordSettings error:&error];

if(error)
{
    NSLog(@"error: %@", [error localizedDescription]);
} else {
    [audioRecorder prepareToRecord];
}
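One prerequisite not shown in the answer (an assumption on my part, but commonly required): the shared AVAudioSession usually has to be in a record-capable category before AVAudioRecorder will capture anything. A minimal sketch:

// Assumed setup, not shown in the original answer: put the shared audio
// session into a category that allows recording before calling -record.
AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *sessionError = nil;
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&sessionError];
[session setActive:YES error:&sessionError];
if (sessionError)
{
    NSLog(@"audio session error: %@", [sessionError localizedDescription]);
}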

After this, you just need to call [audioRecorder record] to record the audio; it will be recorded in caf format. If you want to see my recordAudio function, here it is.

- (void)recordAudio
{
    if (!audioRecorder.recording)
    {
        _playButton.enabled = NO;
        _recordButton.title = @"Stop";
        [audioRecorder record];
        [self animate1:nil finished:nil context:nil];
    }
    else
    {
        [_recordingImage stopAnimating];
        [audioRecorder stop];
        _playButton.enabled = YES;
        _recordButton.title = @"Record";
    }
}

Step 2: Convert the caf format to wav format. I was able to do this with the following function.

- (BOOL)exportAssetAsWaveFormat:(NSString *)filePath
{
NSError *error = nil;

NSDictionary *audioSetting = [NSDictionary dictionaryWithObjectsAndKeys:
                              [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                              [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                              [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                              [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                              [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                              [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                              [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                              [NSData data], AVChannelLayoutKey, nil];

NSString *audioFilePath = filePath;
AVURLAsset * URLAsset = [[AVURLAsset alloc]  initWithURL:[NSURL fileURLWithPath:audioFilePath] options:nil];

if (!URLAsset) return NO;

AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:URLAsset error:&error];
if (error) return NO;

NSArray *tracks = [URLAsset tracksWithMediaType:AVMediaTypeAudio];
if (![tracks count]) return NO;

AVAssetReaderAudioMixOutput *audioMixOutput = [AVAssetReaderAudioMixOutput
                                               assetReaderAudioMixOutputWithAudioTracks:tracks
                                               audioSettings:audioSetting];

if (![assetReader canAddOutput:audioMixOutput]) return NO;

[assetReader addOutput:audioMixOutput];

if (![assetReader startReading]) return NO;



NSString *title = @"WavConverted";
NSArray *docDirs = NSSearchPathForDirectoriesInDomains (NSDocumentDirectory, NSUserDomainMask, YES);
NSString *docDir = [docDirs objectAtIndex: 0];
NSString *outPath = [[docDir stringByAppendingPathComponent:title]
                     stringByAppendingPathExtension:@"wav"];

// Delete any previous output file; a failed removal (e.g. on the first run,
// when the file does not exist yet) should not abort the export.
if ([[NSFileManager defaultManager] fileExistsAtPath:outPath])
{
    [[NSFileManager defaultManager] removeItemAtPath:outPath error:NULL];
}

soundFilePath = outPath;

NSURL *outURL = [NSURL fileURLWithPath:outPath];
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outURL
                                                      fileType:AVFileTypeWAVE
                                                         error:&error];
if (error) return NO;

AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                                          outputSettings:audioSetting];
assetWriterInput.expectsMediaDataInRealTime = NO;

if (![assetWriter canAddInput:assetWriterInput]) return NO;

[assetWriter addInput:assetWriterInput];

if (![assetWriter startWriting]) return NO;


//[assetReader retain];
//[assetWriter retain];

[assetWriter startSessionAtSourceTime:kCMTimeZero];

dispatch_queue_t queue = dispatch_queue_create("assetWriterQueue", NULL);

[assetWriterInput requestMediaDataWhenReadyOnQueue:queue usingBlock:^{

    NSLog(@"start");

    while (1)
    {
        if ([assetWriterInput isReadyForMoreMediaData] && (assetReader.status == AVAssetReaderStatusReading)) {

            CMSampleBufferRef sampleBuffer = [audioMixOutput copyNextSampleBuffer];

            if (sampleBuffer) {
                [assetWriterInput appendSampleBuffer:sampleBuffer];
                CFRelease(sampleBuffer);
            } else {
                [assetWriterInput markAsFinished];
                break;
            }
        }
    }

    [assetWriter finishWriting];

    //[self playWavFile];
    NSError *err;
    NSData *audioData = [NSData dataWithContentsOfFile:soundFilePath options: 0 error:&err];
    [self.audioDelegate doneRecording:audioData];
    //[assetReader release ];
    //[assetWriter release ];
    NSLog(@"soundFilePath=%@",soundFilePath);
    NSDictionary *dict = [[NSFileManager defaultManager] attributesOfItemAtPath:soundFilePath error:&err];
    NSLog(@"size of wav file = %@",[dict objectForKey:NSFileSize]);
    //NSLog(@"finish");
}];

// The writing itself finishes asynchronously in the block above.
return YES;
}

Inside this function, I call the audioDelegate's doneRecording function with the audioData, which is now in wav format. Here is the code for doneRecording.

- (void)doneRecording:(NSData *)contents
{
    myContents = [[NSData dataWithData:contents] retain];
    [self returnResult:alertCallbackId args:@"Recording Done.", nil];
}
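The answer never shows how audioDelegate is declared; a minimal protocol it could conform to (the protocol name here is an assumption) would be:

// Hypothetical declaration for the audioDelegate used above; the original
// answer does not show it.
@protocol AudioRecordingDelegate <NSObject>
- (void)doneRecording:(NSData *)contents;
@end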

// Call this function when you have results to send back to javascript callbacks
// callbackId: int, comes from the handleCall function
// args: nil-terminated list of objects to send to the javascript callback
- (void)returnResult:(int)callbackId args:(id)arg, ...
{
  if (callbackId==0) return;

  va_list argsList;
  NSMutableArray *resultArray = [[NSMutableArray alloc] init];

  if(arg != nil){
    [resultArray addObject:arg];
    va_start(argsList, arg);
    while((arg = va_arg(argsList, id)) != nil)
      [resultArray addObject:arg];
    va_end(argsList);
  }

  NSString *resultArrayString = [json stringWithObject:resultArray allowScalar:YES error:nil];
  NSString *callbackJS = [NSString stringWithFormat:@"NativeBridge.resultForCallback(%d,%@);",
                          callbackId, resultArrayString];
  [self performSelectorOnMainThread:@selector(stringByEvaluatingJavaScriptFromString:)
                         withObject:callbackJS
                      waitUntilDone:NO];
  [resultArray release];
}
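For illustration, with a callbackId of 3 and the single argument passed from doneRecording, the string evaluated in the UIWebView would look like this (assuming SBJSON serializes the array as shown):

// JavaScript produced by returnResult for callbackId 3:
NativeBridge.resultForCallback(3, ["Recording Done."]);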

Step 3: Now it is time to get back to the JavaScript inside the UIWebView to tell it that we are done recording the audio, so that it can start receiving our chunks of data. I am using websockets to transfer the data back to JavaScript. The data is transferred in chunks because the server I am using (https://github.com/benlodotcom/BLWebSocketsServer) is built on top of libwebsockets (http://git.warmcat.com/cgi-bin/cgit/libwebsockets/).

This is how the server is started in the delegate class.

- (id)initWithFrame:(CGRect)frame 
{
  if (self = [super initWithFrame:frame]) {

      [self _createServer];
      [self.server start];
      myContents = [NSData data];

    // Set the delegate so that "shouldStartLoadWithRequest" gets called
    self.delegate = self;

    // Set non-opaque so that "body{background-color:transparent}" works
    self.opaque = NO;

    // Instantiate the JSON parser library
    json = [SBJSON new];

    // Load our html file
    NSString *path = [[NSBundle mainBundle] pathForResource:@"webview-document" ofType:@"html"];
    [self loadRequest:[NSURLRequest requestWithURL:[NSURL fileURLWithPath:path]]];
  }
  return self;
}
-(void) _createServer
{
    /*Create a simple echo server*/
    self.server = [[BLWebSocketsServer alloc] initWithPort:9000 andProtocolName:echoProtocol];
    [self.server setHandleRequestBlock:^NSData *(NSData *data) {

        NSString *convertedString = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
        NSLog(@"Received Request...%@",convertedString);

        if([convertedString isEqualToString:@"start"])
        {
            NSLog(@"myContents size: %d",[myContents length]);

            int contentSize = [myContents length];
            int chunkSize = 64 * 1023;
            chunksCount = (contentSize / chunkSize) + 1;

            NSLog(@"ChunkSize=%d",chunkSize);
            NSLog(@"chunksCount=%d",chunksCount);

            chunksArray =  [[NSMutableArray array] retain];

            int index = 0;
            //NSRange chunkRange;

            for(int i=1;i<=chunksCount;i++)
            {

                if(i==chunksCount)
                {
                    NSRange chunkRange = {index,contentSize-index};
                    NSLog(@"chunk# = %d, chunkRange=(%d,%d)",i,index,contentSize-index);
                    NSData *dataChunk = [myContents subdataWithRange:chunkRange];
                    [chunksArray addObject:dataChunk];
                    break;
                }
                else
                {
                    NSRange chunkRange = {index, chunkSize};
                    NSLog(@"chunk# = %d, chunkRange=(%d,%d)",i,index,chunkSize);
                    NSData *dataChunk = [myContents subdataWithRange:chunkRange];
                    index += chunkSize;
                    [chunksArray addObject:dataChunk];
                }
            }

            return [chunksArray objectAtIndex:0];

        }
        else
        {
            int chunkNumber = [convertedString intValue];

            if(chunkNumber>0 && (chunkNumber+1)<=chunksCount)
            {
                return [chunksArray objectAtIndex:(chunkNumber)];
            }


        }

        NSLog(@"Releasing Array");
        [chunksArray release];
        chunksCount = 0;
        return [NSData dataWithBase64EncodedString:@"Stop"];
    }];
}
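Note that dataWithBase64EncodedString: is not Foundation API on iOS 6; it comes from a third-party NSData base64 category. Conveniently, the four base64 characters in @"Stop" decode to exactly 3 bytes, which is what the JavaScript side below relies on (the e.data.size != 3 check) to detect the end of the stream.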

The code on the JavaScript side is:

var socket;
var chunkCount = 0;
var soundBlob, soundUrl;
var smallBlobs = new Array();

function captureMovieCallback(response)
{
    if(socket)
    {
        try{
            socket.send('start');
        }
        catch(e)
        {
            log('Socket is not valid object');
        }

    }
    else
    {
        log('socket is null');
    }
}

function closeSocket(response)
{
    socket.close();
}


function connect(){
    try{
        window.WebSocket = window.WebSocket || window.MozWebSocket;

        socket = new WebSocket('ws://127.0.0.1:9000', 'echo-protocol');

        socket.onopen = function(){
        }

        socket.onmessage = function(e){
            var data = e.data;
            if(e.data instanceof ArrayBuffer)
            {
                log('its arrayBuffer');
            }
            else if(e.data instanceof Blob)
            {
                if(soundBlob)
                   log('its Blob of size = '+ e.data.size + ' final blob size:'+ soundBlob.size);

                if(e.data.size != 3)
                {
                    //log('its Blob of size = '+ e.data.size);
                    smallBlobs[chunkCount]= e.data;
                    chunkCount = chunkCount +1;
                    socket.send(''+chunkCount);
                }
                else
                {
                    //alert('End Received');
                    try{
                    soundBlob = new Blob(smallBlobs,{ "type" : "audio/wav" });
                    var myURL = window.URL || window.webkitURL;
                    soundUrl = myURL.createObjectURL(soundBlob);
                    log('soundURL='+soundUrl);
                    }
                    catch(e)
                    {
                        log('Problem creating blob and url.');
                    }

                    try{
                        var serverUrl = 'http://10.44.45.74:8080/MyTestProject/WebRecording?record';
                        var xhr = new XMLHttpRequest();
                        xhr.open('POST',serverUrl,true);
                        xhr.setRequestHeader("content-type","multipart/form-data");
                        xhr.send(soundBlob);
                    }
                    catch(e)
                    {
                        log('error uploading blob file');
                    }

                    socket.close();
                }

                //alert(JSON.stringify(msg, null, 4));
            }
            else
            {
                log('dont know');
            }
        }

        socket.onclose = function(){
            //message('<p class="event">Socket Status: '+socket.readyState+' (Closed)');
            log('final blob size:'+soundBlob.size);
        }

    } catch(exception){
       log('<p>Error: '+exception);
    }
}

function log(msg) {
    NativeBridge.log(msg);
}
function stopCapture() {
    NativeBridge.call("stopMovie", null,null);
}

function startCapture() {
    NativeBridge.call("captureMovie",null,captureMovieCallback);
}
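Once soundUrl exists, replaying the recording on the JavaScript side is just a matter of pointing an audio element at it. A small sketch (the element id is an assumption):

// Hypothetical playback helper; assumes an <audio id="player"> element in
// the page and that soundUrl was created in onmessage above.
function playRecording() {
    var audio = document.getElementById('player');
    audio.src = soundUrl;
    audio.play();
}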

NativeBridge.js

var NativeBridge = {
  callbacksCount : 1,
  callbacks : {},

  // Automatically called by native layer when a result is available
  resultForCallback : function resultForCallback(callbackId, resultArray) {
    try {
    var callback = NativeBridge.callbacks[callbackId];
    if (!callback) return;
    console.log("calling callback for "+callbackId);
    callback.apply(null,resultArray);
    } catch(e) {alert(e)}
  },

  // Use this in javascript to request native objective-c code
  // functionName : string (I think the name is explicit :p)
  // args : array of arguments
  // callback : function with n-arguments that is going to be called when the native code returned
  call : function call(functionName, args, callback) {

    //alert("call");
    //alert('callback='+callback);
    var hasCallback = callback && typeof callback == "function";
    var callbackId = hasCallback ? NativeBridge.callbacksCount++ : 0;

    if (hasCallback)
      NativeBridge.callbacks[callbackId] = callback;

    var iframe = document.createElement("IFRAME");
    iframe.setAttribute("src", "js-frame:" + functionName + ":" + callbackId+ ":" + encodeURIComponent(JSON.stringify(args)));
    document.documentElement.appendChild(iframe);
    iframe.parentNode.removeChild(iframe);
    iframe = null;

  },

    log : function log(message) {

        var iframe = document.createElement("IFRAME");
        iframe.setAttribute("src", "ios-log:"+encodeURIComponent(JSON.stringify("#iOS#" + message)));
        document.documentElement.appendChild(iframe);
        iframe.parentNode.removeChild(iframe);
        iframe = null;

    }

};
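The native half of this bridge, the UIWebViewDelegate method that intercepts the js-frame: URLs created above, is not shown in the answer. A minimal sketch of what it could look like (the parsing details are assumptions; encodeURIComponent escapes colons, so splitting on ":" is safe here):

// A hypothetical shouldStartLoadWithRequest: handler for the js-frame:
// scheme used by NativeBridge.call; the original answer does not show it.
- (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType
{
    NSString *url = [[request URL] absoluteString];
    if ([url hasPrefix:@"js-frame:"]) {
        // Format: js-frame:<functionName>:<callbackId>:<encoded JSON args>
        NSArray *parts = [url componentsSeparatedByString:@":"];
        NSString *functionName = [parts objectAtIndex:1];
        int callbackId = [[parts objectAtIndex:2] intValue];
        NSString *argsJSON = [[parts objectAtIndex:3]
            stringByReplacingPercentEscapesUsingEncoding:NSUTF8StringEncoding];
        NSLog(@"native call: %@ (callback %d) args=%@", functionName, callbackId, argsJSON);
        // Dispatch to the requested native method here, then reply with
        // [self returnResult:callbackId args:...] once a result is ready.
        return NO; // swallow the fake iframe navigation
    }
    return YES;
}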
  1. We call connect() on the JavaScript side on body load of the HTML page.

  2. Once we receive the callback (captureMovieCallback) from the startCapture function, we send a "start" message indicating that we are ready to accept the data.

  3. The server on the Objective-C side splits the wav audio data into small chunks of chunkSize = 64*1023 bytes and stores them in an array.

  4. It sends the first chunk back to the JavaScript side.

  5. JavaScript accepts this chunk and sends back the number of the next chunk it needs from the server.

  6. The server sends the chunk indicated by this number. This process repeats until we send the last chunk to JavaScript.

  7. Finally we send a stop message back to the JavaScript side indicating that we are done. It is 3 bytes long (the four base64 characters of @"Stop" decode to exactly 3 bytes), which is used as the criterion to break the receive loop.

  8. Each chunk is stored as a small blob in an array. We then create a bigger blob from these small blobs using the following line:

    soundBlob = new Blob(smallBlobs, {"type": "audio/wav"});

    This blob is uploaded to a server, which writes it out as a wav file. We can then pass the url of that wav file as the src of an audio tag to replay it on the JavaScript side.

  9. We close the websocket connection after sending the blob to the server.

Hope this is clear and understandable.

Answer 1 (score: 0)

If you just want to play the sound, you are much better off using iOS's native audio playback APIs rather than an HTML audio tag.
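For example, a minimal native-playback sketch using AVAudioPlayer (the file path variable here is assumed to point at the recorded wav file):

// A minimal playback sketch with AVAudioPlayer from AVFoundation, as this
// answer suggests; soundFilePath is assumed to hold the wav file's path.
NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:soundFilePath]
                                                               error:&error];
if (error)
{
    NSLog(@"player error: %@", [error localizedDescription]);
}
else
{
    [player prepareToPlay];
    [player play];
}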