CSCore (https://github.com/filoe/cscore) seems to be a very good C# audio library, but it lacks documentation and good examples.
I have been playing with Bass.Net for quite a while, and CSCore's architecture is different from the Bass library, so it is hard to find the right way to do some common tasks.
I am trying to capture microphone input from a WasapiCapture device and output the recorded data to a WasapiOut device, but I have not been able to get it working.
Below is the code I could put together after googling, but it does not work.
MMDeviceEnumerator deviceEnum = new MMDeviceEnumerator();
MMDeviceCollection devices = deviceEnum.EnumAudioEndpoints(DataFlow.Capture, DeviceState.Active);

using (var capture = new WasapiCapture())
{
    capture.Device = deviceEnum.GetDefaultAudioEndpoint(DataFlow.Capture, Role.Multimedia);
    capture.Initialize();

    using (var source = new SoundInSource(capture))
    {
        using (var soundOut = new WasapiOut())
        {
            capture.Start();
            soundOut.Device = deviceEnum.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);
            soundOut.Initialize(source);
            soundOut.Play();
        }
    }
}
What I want to do is write an application like this one: http://www.pitchtech.ch/PitchBox/
I have my own DSP functions that I want to apply to the recorded data.
Does anyone have an example of routing WasapiCapture into WasapiOut with a custom DSP in between?
EDIT:
I found the solution with the help of Florian Rosmann (filoe), the creator of the CSCore library.
Here is a sample DSP class that amplifies the audio data passed through it.
class DSPGain : ISampleSource
{
    ISampleSource _source;

    public DSPGain(ISampleSource source)
    {
        if (source == null)
            throw new ArgumentNullException("source");
        _source = source;
    }

    public int Read(float[] buffer, int offset, int count)
    {
        // Convert the gain in dB to a linear amplification factor.
        float gainAmplification = (float)(Math.Pow(10.0, GainDB / 20.0));
        int samples = _source.Read(buffer, offset, count);
        for (int i = offset; i < offset + samples; i++)
        {
            // Apply the gain and clamp each sample to the valid [-1, 1] float range.
            buffer[i] = Math.Max(Math.Min(buffer[i] * gainAmplification, 1), -1);
        }
        return samples;
    }

    public float GainDB { get; set; }

    public bool CanSeek
    {
        get { return _source.CanSeek; }
    }

    public WaveFormat WaveFormat
    {
        get { return _source.WaveFormat; }
    }

    public long Position
    {
        get { return _source.Position; }
        set { _source.Position = value; }
    }

    public long Length
    {
        get { return _source.Length; }
    }

    public void Dispose()
    {
    }
}
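For reference, Read turns GainDB into a linear factor with Math.Pow(10.0, GainDB / 20.0), so a GainDB of 6 amplifies the samples by roughly 2 and a GainDB of 20 by exactly 10; the Math.Max/Math.Min clamp then keeps the float samples inside the [-1, 1] range.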
You can use the DSPGain class as in the following example:
WasapiCapture waveIn;
WasapiOut soundOut;
DSPGain gain;

private void StartFullDuplex()
{
    try
    {
        MMDeviceEnumerator deviceEnum = new MMDeviceEnumerator();
        MMDeviceCollection devices = deviceEnum.EnumAudioEndpoints(DataFlow.Capture, DeviceState.Active);

        // Capture from the default recording device (exclusive mode, 5 ms latency).
        waveIn = new WasapiCapture(false, AudioClientShareMode.Exclusive, 5);
        waveIn.Device = deviceEnum.GetDefaultAudioEndpoint(DataFlow.Capture, Role.Multimedia);
        waveIn.Initialize();
        waveIn.Start();

        var source = new SoundInSource(waveIn) { FillWithZeros = true };

        // Play back on the default render device, with the gain DSP in the chain.
        soundOut = new WasapiOut(false, AudioClientShareMode.Exclusive, 5);
        soundOut.Device = deviceEnum.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);
        gain = new DSPGain(source.ToSampleSource());
        gain.GainDB = 5;
        soundOut.Initialize(gain.ToWaveSource(16));
        soundOut.Play();
    }
    catch (Exception ex)
    {
        Debug.WriteLine("Exception in StartFullDuplex: " + ex.Message);
    }
}

private void StopFullDuplex()
{
    if (soundOut != null) soundOut.Dispose();
    if (waveIn != null) waveIn.Dispose();
}