I am trying to record audio data from the microphone into a .wav file and play it back. I also need the actual data (amplitudes) for plotting a graph, so I am using AudioUnit. I have set an inputCallBack and a renderCallBack on the AudioUnit object, but I do not know how to write the AudioBuffers to the .wav file from the render_CallBack method. I have attached the code I have tried so far.
Please help me...
Step 1
AudioStreamBasicDescription audioStreamBasicDesc;
AudioUnit.AudioUnit audioUnit;
string m_recordingFilePath;
ExtAudioFile extAudioFileObj;
public override void ViewDidLoad()
{
base.ViewDidLoad();
audioStreamBasicDesc.SampleRate = 16000;
audioStreamBasicDesc.Format = AudioFormatType.LinearPCM;
audioStreamBasicDesc.FramesPerPacket = 1;
audioStreamBasicDesc.ChannelsPerFrame = 1;
audioStreamBasicDesc.BytesPerFrame =
audioStreamBasicDesc.ChannelsPerFrame * sizeof(short);
audioStreamBasicDesc.BytesPerPacket =
audioStreamBasicDesc.ChannelsPerFrame * sizeof(short);
audioStreamBasicDesc.BitsPerChannel = 16;
audioStreamBasicDesc.Reserved = 0;
audioStreamBasicDesc.FormatFlags = AudioFormatFlags.IsSignedInteger |
AudioFormatFlags.IsPacked;
var documents = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
var tmp = Path.Combine(documents, "..", "tmp");
m_recordingFilePath = Path.Combine(tmp,
String.Format("{0}.wav",
"MyFile" + DateTime.Now.ToString("MM-dd-yyyy HH-mm-ss",
CultureInfo.InvariantCulture)));
extAudioFileObj = ExtAudioFile.CreateWithUrl(CFUrl.FromFile(m_recordingFilePath),
AudioFileType.WAVE,
audioStreamBasicDesc,
AudioFileFlags.EraseFlags);
prepareAudioUnit();
}
Step 2
public void prepareAudioUnit()
{
var _audioComponent = AudioComponent.FindComponent(AudioTypeOutput.Remote);
audioUnit = _audioComponent.CreateAudioUnit();
audioUnit.SetEnableIO(true,
AudioUnitScopeType.Input,
1 // Remote Input
);
// setting audio format
audioUnit.SetAudioFormat(audioStreamBasicDesc,
AudioUnitScopeType.Output,
1
);
audioUnit.SetInputCallback(input_CallBack, AudioUnitScopeType.Input, 1);
audioUnit.SetRenderCallback(render_CallBack, AudioUnitScopeType.Global, 0);
audioUnit.Initialize();
audioUnit.Start();
}
Step 3
AudioUnitStatus input_CallBack(AudioUnitRenderActionFlags actionFlags,
AudioTimeStamp timeStamp,
uint busNumber,
uint numberFrames,
AudioUnit.AudioUnit audioUnit)
{
return AudioUnitStatus.NoError;
}
Step 4
AudioUnitStatus render_CallBack(AudioUnitRenderActionFlags actionFlags,
AudioTimeStamp timeStamp,
uint busNumber,
uint numberFrames,
AudioBuffers data)
{
// getting microphone input signal
var status = audioUnit.Render(ref actionFlags,
timeStamp,
1, // Remote input
numberFrames,
data);
if (status != AudioUnitStatus.NoError)
{
return status;
}
//get pointer to buffer
var outP = data[0].Data;
unsafe
{
// samples are 16-bit signed PCM, so read them as short, not int
var outPtr = (short*)outP.ToPointer();
for (int i = 0; i < numberFrames; i++)
{
var val = *outPtr;
outPtr++;
//lastestPickVal = val; //This is for ploting graph
Console.WriteLine(val);
}
}
extAudioFileObj.ClientDataFormat = audioStreamBasicDesc;
// Here I am trying to write the data into the .wav file. The file is generated,
// but it is corrupted and contains no actual audio (file size is roughly 4 KB or 100 KB).
var err = extAudioFileObj.Write(numberFrames, data);
Console.WriteLine("OUTPUT" + busNumber);
return AudioUnitStatus.NoError;
}
Answer 0 (score: 1)
A while ago I wrote an IAudioStream abstraction for Xamarin that might be of some help to you. It takes byte buffers from an AudioQueueBuffer; what you are probably looking for is marshalling the buffer into bytes:
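The referenced snippet is not reproduced here, but as a rough, self-contained sketch (the demo buffer and names below are hypothetical, not the original abstraction), copying a native audio buffer such as `AudioQueueBuffer.AudioData` or `data[0].Data` into managed bytes can be done with `Marshal.Copy`:

```csharp
using System;
using System.Runtime.InteropServices;

class BufferMarshalDemo
{
    // Copy byteCount bytes from a native audio buffer pointer
    // (e.g. AudioBuffers[0].Data or AudioQueueBuffer.AudioData)
    // into a managed byte array.
    public static byte[] BufferToBytes(IntPtr dataPtr, int byteCount)
    {
        var bytes = new byte[byteCount];
        Marshal.Copy(dataPtr, bytes, 0, byteCount);
        return bytes;
    }

    static void Main()
    {
        // Simulate a native buffer holding four 16-bit samples.
        short[] samples = { 100, -200, 300, -400 };
        int byteCount = samples.Length * sizeof(short);
        IntPtr native = Marshal.AllocHGlobal(byteCount);
        try
        {
            Marshal.Copy(samples, 0, native, samples.Length);
            byte[] managed = BufferToBytes(native, byteCount);
            Console.WriteLine(managed.Length); // 8
        }
        finally
        {
            Marshal.FreeHGlobal(native);
        }
    }
}
```

In the render callback, `dataPtr` would be `data[0].Data` and `byteCount` would be `numberFrames * BytesPerFrame` for the format configured above.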
There is also a WAV recorder class that hooks into the source and writes it to a WAV file once the raw signal has been extracted:
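That recorder class is likewise not included here. For reference, a minimal sketch of what writing 16-bit PCM to a WAV file involves (a 44-byte RIFF header followed by the raw samples; this is an illustration, not the original class):

```csharp
using System;
using System.IO;
using System.Text;

class WavWriterSketch
{
    // Write a minimal 16-bit PCM WAV file: a 44-byte RIFF header
    // followed by the raw little-endian sample bytes.
    public static void WriteWav(string path, byte[] pcm, int sampleRate, short channels)
    {
        short bitsPerSample = 16;
        int byteRate = sampleRate * channels * bitsPerSample / 8;
        short blockAlign = (short)(channels * bitsPerSample / 8);

        using (var fs = new FileStream(path, FileMode.Create))
        using (var w = new BinaryWriter(fs))
        {
            w.Write(Encoding.ASCII.GetBytes("RIFF"));
            w.Write(36 + pcm.Length);              // remaining chunk size
            w.Write(Encoding.ASCII.GetBytes("WAVE"));
            w.Write(Encoding.ASCII.GetBytes("fmt "));
            w.Write(16);                           // fmt sub-chunk size
            w.Write((short)1);                     // audio format: PCM
            w.Write(channels);
            w.Write(sampleRate);
            w.Write(byteRate);
            w.Write(blockAlign);
            w.Write(bitsPerSample);
            w.Write(Encoding.ASCII.GetBytes("data"));
            w.Write(pcm.Length);                   // data sub-chunk size
            w.Write(pcm);
        }
    }

    static void Main()
    {
        var pcm = new byte[3200]; // 0.1 s of silence at 16 kHz mono
        WriteWav("test.wav", pcm, 16000, 1);
        Console.WriteLine(new FileInfo("test.wav").Length); // 44 + 3200 = 3244
    }
}
```

This is essentially what ExtAudioFile does for you when created with AudioFileType.WAVE, which is why setting ClientDataFormat correctly matters: the header and the sample bytes must describe the same format.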
I hope these give you at least some help.