How do I access the samples in an audio file?

Date: 2011-12-06 13:33:02

Tags: ios audio sampling

I am writing an iPhone app that lets the user design an audio filter and test it on recorded sound. I try to do the following:

  1. I create two audio files named "recordedAudio.aiff" and "filteredAudio.aiff".
  2. I record sound from the microphone and save it to "recordedAudio.aiff".
  3. I copy the audio data from "recordedAudio.aiff" into a buffer.
  4. Later I will apply real filtering to the data in that buffer, but for testing I just want to halve the value of every sample (which should halve the volume), so I can be sure I am able to manipulate individual samples.
  5. I write the result into a second buffer.
  6. I write that buffer's data to the second file, "filteredAudio.aiff".
  7. I play the second file.

The problem: as long as I simply copy the data from one buffer to the other and write it to the second audio file, everything works. But as soon as I perform any kind of operation on the samples (such as dividing them by 2), the result is random noise. This makes me suspect that I am not interpreting the sample values correctly, but I have been trying for five days now and I just don't get it. If you know how to access and manipulate individual audio samples, please help me; I would really appreciate it! Thanks!

    Here is the code that will later do the filtering (for now it is just supposed to divide every audio sample by 2):

    OSStatus status = noErr;
    UInt32 propertySizeDataPacketCount;
    UInt32 writabilityDataPacketCount;
    UInt32 numberOfPackets;
    UInt32 propertySizeMaxPacketSize;
    UInt32 writabilityMaxPacketSize;
    UInt32 maxPacketSize;
    UInt32 numberOfBytesRead;
    UInt32 numberOfBytesToWrite;
    UInt32 propertySizeDataByteCount;
    SInt64 currentPacket;
    double x0;
    double x1;
    
    
    status = AudioFileOpenURL(audioFiles->recordedFile, 
                              kAudioFileReadPermission, 
                              kAudioFileAIFFType, 
                              &audioFiles->inputFile);
    status = AudioFileOpenURL(audioFiles->filteredFile, 
                              kAudioFileReadWritePermission, 
                              kAudioFileAIFFType, 
                              &audioFiles->outputFile);
    
    status = AudioFileGetPropertyInfo(audioFiles->inputFile, 
                                      kAudioFilePropertyAudioDataPacketCount, 
                                      &propertySizeDataPacketCount, 
                                      &writabilityDataPacketCount);
    
    status = AudioFileGetProperty(audioFiles->inputFile, 
                                  kAudioFilePropertyAudioDataPacketCount, 
                                  &propertySizeDataPacketCount, 
                                  &numberOfPackets);
    
    status = AudioFileGetPropertyInfo (audioFiles->inputFile, 
                                       kAudioFilePropertyMaximumPacketSize, 
                                       &propertySizeMaxPacketSize, 
                                       &writabilityMaxPacketSize);
    
    status = AudioFileGetProperty(audioFiles->inputFile, 
                                  kAudioFilePropertyMaximumPacketSize, 
                                  &propertySizeMaxPacketSize, 
                                  &maxPacketSize);
    
    
    SInt16 *inputBuffer = (SInt16 *)malloc(numberOfPackets * maxPacketSize);
    SInt16 *outputBuffer = (SInt16 *)malloc(numberOfPackets * maxPacketSize);
    
    
    currentPacket = 0;
    status = AudioFileReadPackets(audioFiles->inputFile, 
                                  false, &numberOfBytesRead, 
                                  NULL, 
                                  currentPacket, 
                                  &numberOfPackets, 
                                  inputBuffer);
    
    
    for (int i = 0; i < numberOfPackets; i++) {
    
        x0 = (double)inputBuffer[i];
        x1 = 0.5 * x0; //This is supposed to reduce the value of the sample by half
        //x1 = x0;     //This just copies the value of the sample and works fine
        outputBuffer[i] = (SInt16)x1;
    }
    
    
    
    numberOfBytesToWrite = numberOfBytesRead;
    currentPacket = 0;
    status = AudioFileWritePackets(audioFiles->outputFile, 
                                   false, 
                                   numberOfBytesToWrite, 
                                   NULL, 
                                   currentPacket, 
                                   &numberOfPackets, 
                                   outputBuffer);
    
    status = AudioFileClose(audioFiles->inputFile);
    status = AudioFileClose(audioFiles->outputFile);
    

    To create the audio files I use the following code:

     #import "AudioFiles.h"
    
     #define SAMPLE_RATE         44100
    
     #define FRAMES_PER_PACKET   1
     #define CHANNELS_PER_FRAME  1
     #define BYTES_PER_FRAME     2
     #define BYTES_PER_PACKET    2
     #define BITS_PER_CHANNEL    16
    
     @implementation AudioFiles
    
     -(void)setupAudioFormat:(AudioStreamBasicDescription *)format {
    format->mSampleRate = SAMPLE_RATE;
    format->mFormatID = kAudioFormatLinearPCM;
    format->mFramesPerPacket = FRAMES_PER_PACKET;
    format->mChannelsPerFrame = CHANNELS_PER_FRAME;
    format->mBytesPerFrame = BYTES_PER_FRAME;
    format->mBytesPerPacket = BYTES_PER_PACKET;
    format->mBitsPerChannel = BITS_PER_CHANNEL;
    format->mReserved = 0;
    format->mFormatFlags = kLinearPCMFormatFlagIsBigEndian |
        kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
     }
    
    
     - (id)init
     {
      self = [super init];
      if (self) {
    
        char path[256];
        NSArray *dirPaths;
        NSString *docsDir;
    
        dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        docsDir = [dirPaths objectAtIndex:0];
    
        NSString *recordedFilePath = [docsDir    stringByAppendingPathComponent:@"/recordedAudio.aiff"];
        [recordedFilePath getCString:path maxLength:sizeof(path) encoding:NSUTF8StringEncoding];
        recordedFile = CFURLCreateFromFileSystemRepresentation(NULL, (UInt8 *)path, strlen(path), false);
        recordedFileURL = [NSURL fileURLWithPath:recordedFilePath];
    
        dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        docsDir = [dirPaths objectAtIndex:0];
    
        NSString *filteredFilePath = [docsDir stringByAppendingPathComponent:@"/filteredAudio.aiff"];
        [filteredFilePath getCString:path maxLength:sizeof(path) encoding:NSUTF8StringEncoding];
        filteredFile = CFURLCreateFromFileSystemRepresentation(NULL, (UInt8 *)path, strlen(path), false);
        filteredFileURL = [NSURL fileURLWithPath:filteredFilePath];
    
        AudioStreamBasicDescription audioFileFormat;
        [self setupAudioFormat:&audioFileFormat];
    
        OSStatus status = noErr;
        status = AudioFileCreateWithURL(recordedFile, 
                                        kAudioFileAIFFType, 
                                        &audioFileFormat, 
                                        kAudioFileFlags_EraseFile, 
                                        &inputFile);
        status = AudioFileCreateWithURL(filteredFile, 
                                        kAudioFileAIFFType, 
                                        &audioFileFormat, 
                                        kAudioFileFlags_EraseFile, 
                                        &outputFile);
    
    }
    
    return self;
    }
    @end
    

    For recording I use an AVAudioRecorder with these settings:

     NSDictionary *recordSettings =
     [[NSDictionary alloc] initWithObjectsAndKeys:
     [NSNumber numberWithFloat: 8000.0], AVSampleRateKey,
     [NSNumber numberWithInt: kAudioFormatLinearPCM], AVFormatIDKey,
     [NSNumber numberWithInt: 1], AVNumberOfChannelsKey,
     [NSNumber numberWithInt: AVAudioQualityMax], AVEncoderAudioQualityKey,
     [NSNumber numberWithInt:16], AVEncoderBitRateKey,
     [NSNumber numberWithBool:YES],AVLinearPCMIsBigEndianKey,
     [NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
     [NSNumber numberWithInt:16],AVLinearPCMBitDepthKey,
     [NSNumber numberWithBool:YES], AVLinearPCMIsNonInterleaved,
     nil];
    
    NSError *error = nil;
    
    audioRecorder = [[AVAudioRecorder alloc] initWithURL:audioFiles->recordedFileURL settings:recordSettings error:&error];
    
    if (error)
    {
        NSLog(@"error: %@", [error localizedDescription]);
    
    } else {
        [audioRecorder prepareToRecord];
    }
    

1 Answer:

Answer 0 (score: 4):

Your input data is big-endian, but you are treating it as if it were little-endian.

One way to handle this is:

SInt16 inVal = OSSwapBigToHostInt16(inputBuffer[i]); // big-endian -> host order
SInt16 outVal = inVal / 2;                           // safe to do arithmetic now
outputBuffer[i] = OSSwapHostToBigInt16(outVal);      // host order -> big-endian