Generating Morse-code-style tones in Objective-C

Date: 2011-09-04 09:35:45

Tags: objective-c

I have a class that lets me play a tone using an Audio Unit. What I would like is for the class to play that tone in Morse-code style when I send it a phrase or a letter.

How should I approach this? I'm hoping someone can point me in the right direction. I've included the audio generator's .h and .m files below.

//
//  Singer.h
//  musiculesdev
//
//  Created by Dylan on 2/20/09.
//  Copyright 2009 __MyCompanyName__. All rights reserved.
//

#import <Foundation/Foundation.h>
#import <AudioUnit/AudioUnit.h>



@interface Singer : NSObject {


    AudioComponentInstance audioUnit;

}


-(void)initAudio; // put this in init?


-(void)start;
-(void)stop;
-(IBAction)turnOnSound:(id)sender;

@end


//
//  Singer.m
//  musiculesdev
//
//  Created by Dylan on 2/20/09.
//  Copyright 2009 __MyCompanyName__. All rights reserved.
//

#import <AudioUnit/AudioUnit.h>
#import <math.h>

#import "Singer.h"

#define kOutputBus 0
#define kSampleRate 44100
//44100.0f
#define kWaveform (M_PI * 2.0f / kSampleRate)


@implementation Singer


OSStatus playbackCallback(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber, 
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData) {    

    //Singer *me = (Singer *)inRefCon;

    static int phase = 0;

    for(UInt32 i = 0; i < ioData->mNumberBuffers; i++) {

        int samples = ioData->mBuffers[i].mDataByteSize / sizeof(SInt16);

        SInt16 values[samples];

        float waves;

        for(int j = 0; j < samples; j++) {


            waves = 0;


            waves += sin(kWaveform * 261.63f * phase);
            waves += sin(kWaveform * 120.0f * phase);
            waves += sin(kWaveform * 1760.3f * phase);
            waves += sin(kWaveform * 880.0f * phase);            

            waves *= 32500 / 4; // <--------- make sure to divide by how many waves you're stacking

            values[j] = (SInt16)waves; // mono 16-bit sample; no channel packing needed

            phase++;

        }

        memcpy(ioData->mBuffers[i].mData, values, samples * sizeof(SInt16));

    }


    return noErr;

}

-(IBAction)turnOnSound:(id)sender {
    Singer *singer = [[Singer alloc] init];

    [singer start]; // note: this instance is never stored or released (leaks under MRC)
}


-(id)init {
    NSLog(@"In the singer init!!");
    if(self = [super init]) {

        [self initAudio];

    }

    return self;

}

-(void)initAudio {

    OSStatus status;

    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent outputComponent = AudioComponentFindNext(NULL, &desc);

    status = AudioComponentInstanceNew(outputComponent, &audioUnit);

    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, kOutputBus, &flag, sizeof(flag));

    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = kSampleRate;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;

    status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, kOutputBus, &audioFormat, sizeof(audioFormat));

    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = self;

    status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &callbackStruct, sizeof(callbackStruct));

    status = AudioUnitInitialize(audioUnit);

}

-(void)start {

    OSStatus status;

    status = AudioOutputUnitStart(audioUnit);

}

-(void)stop {

    OSStatus status;

    status = AudioOutputUnitStop(audioUnit);

}

-(void)dealloc {

    AudioUnitUninitialize(audioUnit);
    AudioComponentInstanceDispose(audioUnit);

    [super dealloc];

}

@end

1 Answer:

Answer 0 (score: 2)

You need to be able to generate a tone of a specific duration, separated by silences of specific durations. Once you have those two building blocks, you can send Morse code:

dot = 1 unit
dash = 3 units
space between dots/dashes within a letter = 1 unit
space between letters = 3 units
space between words = 7 units

The length of a unit determines the overall speed of the Morse code. Start with, e.g., 50 ms.

The tone should be a pure sine wave at a suitable frequency, e.g. 400 Hz. The silence can just be a spare buffer filled with zeros. That way you can "play" both tone and silence through the same API, without having to worry about timing/synchronization.