Recording audio and playing it back at the same time - C# - Windows Phone 8.1

Posted: 2015-09-06 20:31:05

Tags: c# windows-phone-8.1 windows-applications

I am trying to record audio and play it back right away (I want to hear my own voice in the headphones without saving it), but MediaElement and MediaCapture don't seem to work at the same time. I initialize my MediaCapture like this:

    _mediaCaptureManager = new MediaCapture();
    var settings = new MediaCaptureInitializationSettings();
    settings.StreamingCaptureMode = StreamingCaptureMode.Audio;
    settings.MediaCategory = MediaCategory.Other;
    settings.AudioProcessing = AudioProcessing.Default;
    await _mediaCaptureManager.InitializeAsync(settings);

But I don't really know how to proceed from here. I wonder whether any of the following approaches could work (I tried to implement them without success, and I couldn't find examples):

  1. Is there a way to record audio with StartPreviewAsync(), or does it only work for video? I noticed the following error when setting the CaptureElement Source: "The specified object or value does not exist"; it only happens when I write "settings.StreamingCaptureMode = StreamingCaptureMode.Audio;", everything works with .Video.
  2. How can I record to a stream with StartRecordToStreamAsync()? I mean, how do I initialize the IRandomAccessStream and read from it? Can I write to the stream while I am reading from it? (A sketch of what I mean is shown after the code below.)
  3. I read that setting the MediaElement's AudioCategory and the MediaCapture's MediaCategory to Communications might make this work. However, while my code works with the previous settings (it just records and saves to a file), it stops working if I write "settings.MediaCategory = MediaCategory.Communication;" instead of "settings.MediaCategory = MediaCategory.Other;". Can you tell me why? Here is my current code, which only records, saves, and plays:

    private async void CaptureAudio()
    {
        try
        {
            _recordStorageFile = await KnownFolders.VideosLibrary.CreateFileAsync(fileName, CreationCollisionOption.GenerateUniqueName);
            MediaEncodingProfile recordProfile = MediaEncodingProfile.CreateWav(AudioEncodingQuality.Auto);
            await _mediaCaptureManager.StartRecordToStorageFileAsync(recordProfile, this._recordStorageFile);
            _recording = true;
        }
        catch (Exception e)
        {
            Debug.WriteLine("Failed to capture audio: " + e.Message);
        }
    }

    private async void StopCapture()
    {
        if (_recording)
        {
            await _mediaCaptureManager.StopRecordAsync();
            _recording = false;
        }
    }

    private async void PlayRecordedCapture()
    {
        if (!_recording)
        {
            var stream = await _recordStorageFile.OpenAsync(FileAccessMode.Read);
            playbackElement1.AutoPlay = true;
            playbackElement1.SetSource(stream, _recordStorageFile.FileType);
            playbackElement1.Play();
        }
    }
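
To illustrate point 2, here is a rough sketch of what I imagine recording into a stream could look like; I have not verified it, and the choice of InMemoryRandomAccessStream and the "audio/wav" MIME string are my assumptions:

    private InMemoryRandomAccessStream _audioStream;

    private async void CaptureAudioToStream()
    {
        // Record into an in-memory stream instead of a StorageFile
        // (assumption: StartRecordToStreamAsync accepts any IRandomAccessStream)
        _audioStream = new InMemoryRandomAccessStream();
        MediaEncodingProfile recordProfile = MediaEncodingProfile.CreateWav(AudioEncodingQuality.Auto);
        await _mediaCaptureManager.StartRecordToStreamAsync(recordProfile, _audioStream);
        _recording = true;
    }

    private async void PlayCapturedStream()
    {
        // Rewind before handing the stream to the MediaElement;
        // this only works after the recording has been stopped
        _audioStream.Seek(0);
        playbackElement1.SetSource(_audioStream, "audio/wav");
        playbackElement1.Play();
    }

I don't know whether the MediaElement could read from the stream while it is still being written to, which is really what point 2 asks.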
    
I would be grateful for any suggestion. Have a nice day.

1 Answer:

Answer 0 (score: 2)

Would you consider targeting Windows 10? The new AudioGraph API lets you do exactly this, and Scenario 2 (Device Capture) in the SDK sample demonstrates it well.

First, the sample populates a list with all the output devices:

private async Task PopulateDeviceList()
{
    outputDevicesListBox.Items.Clear();
    outputDevices = await DeviceInformation.FindAllAsync(MediaDevice.GetAudioRenderSelector());
    outputDevicesListBox.Items.Add("-- Pick output device --");
    foreach (var device in outputDevices)
    {
        outputDevicesListBox.Items.Add(device.Name);
    }
}

Then it builds up the AudioGraph. Once that is done, you can record to a file and hear the captured audio through the selected output device at the same time, like this:

AudioGraphSettings settings = new AudioGraphSettings(AudioRenderCategory.Media);
settings.QuantumSizeSelectionMode = QuantumSizeSelectionMode.LowestLatency;

// Use the selected device from the outputDevicesListBox to preview the recording
settings.PrimaryRenderDevice = outputDevices[outputDevicesListBox.SelectedIndex - 1];

CreateAudioGraphResult result = await AudioGraph.CreateAsync(settings);

if (result.Status != AudioGraphCreationStatus.Success)
{
    // TODO: Cannot create graph, propagate error message
    return;
}

AudioGraph graph = result.Graph;

// Create a device output node
CreateAudioDeviceOutputNodeResult deviceOutputNodeResult = await graph.CreateDeviceOutputNodeAsync();
if (deviceOutputNodeResult.Status != AudioDeviceNodeCreationStatus.Success)
{
    // TODO: Cannot create device output node, propagate error message
    return;
}

deviceOutputNode = deviceOutputNodeResult.DeviceOutputNode;

// Create a device input node using the default audio input device
CreateAudioDeviceInputNodeResult deviceInputNodeResult = await graph.CreateDeviceInputNodeAsync(MediaCategory.Other);

if (deviceInputNodeResult.Status != AudioDeviceNodeCreationStatus.Success)
{
    // TODO: Cannot create device input node, propagate error message
    return;
}

deviceInputNode = deviceInputNodeResult.DeviceInputNode;

// Because we are using lowest latency setting, we need to handle device disconnection errors
graph.UnrecoverableErrorOccurred += Graph_UnrecoverableErrorOccurred;

// Start setting up the output file
FileSavePicker saveFilePicker = new FileSavePicker();
saveFilePicker.FileTypeChoices.Add("Pulse Code Modulation", new List<string>() { ".wav" });
saveFilePicker.FileTypeChoices.Add("Windows Media Audio", new List<string>() { ".wma" });
saveFilePicker.FileTypeChoices.Add("MPEG Audio Layer-3", new List<string>() { ".mp3" });
saveFilePicker.SuggestedFileName = "New Audio Track";
StorageFile file = await saveFilePicker.PickSaveFileAsync();

// File can be null if cancel is hit in the file picker
if (file == null)
{
    return;
}

MediaEncodingProfile fileProfile = CreateMediaEncodingProfile(file);

// Operate node at the graph format, but save file at the specified format
CreateAudioFileOutputNodeResult fileOutputNodeResult = await graph.CreateFileOutputNodeAsync(file, fileProfile);

if (fileOutputNodeResult.Status != AudioFileNodeCreationStatus.Success)
{
    // TODO: FileOutputNode creation failed, propagate error message
    return;
}

fileOutputNode = fileOutputNodeResult.FileOutputNode;

// Connect the input node to both output nodes
deviceInputNode.AddOutgoingConnection(fileOutputNode);
deviceInputNode.AddOutgoingConnection(deviceOutputNode);
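
Note that the excerpt above only wires the graph up; the sample then starts the graph to begin recording and monitoring, and finalizes the file output node when it stops. Roughly (a sketch based on the same SDK sample, so treat the exact calls as an assumption):

// Start the graph: audio now flows from the input node to both the
// file output node (recording) and the device output node (monitoring)
graph.Start();

// ... later, when recording should end ...
graph.Stop();

// Finalize the file output node so the recorded file is written out correctly
TranscodeFailureReason finalizeResult = await fileOutputNode.FinalizeAsync();
if (finalizeResult != TranscodeFailureReason.None)
{
    // TODO: Finalization of file failed, propagate error message
    return;
}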