Awaiting text-to-speech

Date: 2018-10-09 09:32:48

Tags: c# .net xamarin xamarin.forms async-await

I have been playing around with Task to understand how it works, so in my toy project I simply want to start text-to-speech and print the time. Here is my attempt:

await Task.Factory.StartNew(
    ()    => System.Diagnostics.Debug.Print("START PLAYING {0}", 
                 System.DateTime.Now.ToString("HH:mm:ss"))).ContinueWith(

    (arg) => DependencyService.Get<ITextToSpeech>().Speak(s)).ContinueWith(

    (arg) => System.Diagnostics.Debug.Print("STOP  PLAYING {0}", 
                 System.DateTime.Now.ToString("HH:mm:ss"))
);

The code lives in an async void Play_Clicked(object sender, System.EventArgs e) event handler, but as far as I can tell it does not wait for the TTS to finish and prints the times immediately:

START PLAYING 11:22:44
START IMPLEMENTATION 11:22:44
STOP  IMPLEMENTATION 11:22:45
STOP  PLAYING 11:22:45

The dependency-service implementation is just copy/pasted from the Xamarin tutorial on TTS:

using Xamarin.Forms;
using AVFoundation;

[assembly: Dependency(typeof(Testers.iOS.TextToSpeechImplementation))]
namespace Testers.iOS
{
    public class TextToSpeechImplementation : ITextToSpeech
    {
        public TextToSpeechImplementation() { }

        public void Speak(string text)
        {
            System.Diagnostics.Debug.Print("START IMPLEMENTATION {0}", System.DateTime.Now.ToString("HH:mm:ss"));

            var speechSynthesizer = new AVSpeechSynthesizer();
            var speechUtterance = new AVSpeechUtterance(text)
            {
                Rate = AVSpeechUtterance.MaximumSpeechRate / 2.8f,
                Voice = AVSpeechSynthesisVoice.FromLanguage(App.current_lang),
                PreUtteranceDelay = 0.5f,
                PostUtteranceDelay = 0.0f,
                Volume = 0.5f,
                PitchMultiplier = 1.0f
            };

            speechSynthesizer.SpeakUtterance(speechUtterance);
            System.Diagnostics.Debug.Print("STOP  IMPLEMENTATION {0}", System.DateTime.Now.ToString("HH:mm:ss"));
        }
    }
}

and the interface is defined as:

using System;

namespace Testers
{
    public interface ITextToSpeech
    {
        void Speak(string text);
    }
}

I am still getting to grips with the whole async/await concept, so I am obviously missing something important here.

Any help would be greatly appreciated!

1 Answer:

Answer 0 (score: 1)

You can use a TaskCompletionSource together with the DidFinishSpeechUtterance handler to determine when the speech output has finished.

Note: subscribing to the DidFinishSpeechUtterance handler automatically assigns an AVSpeechSynthesizerDelegate for you, so you can also skip the Xamarin handler wrapper and create/use your own delegate directly (some use cases require that).
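If you do go the custom-delegate route, a rough sketch (my addition, not part of the original answer) might look like the following; the class name FinishedSpeakingDelegate is invented here, and the override signature should be double-checked against the Xamarin.iOS AVFoundation binding:

using System.Threading.Tasks;
using AVFoundation;

// Hand-rolled delegate that completes a TaskCompletionSource when speaking ends.
class FinishedSpeakingDelegate : AVSpeechSynthesizerDelegate
{
    public TaskCompletionSource<bool> Finished { get; } = new TaskCompletionSource<bool>();

    public override void DidFinishSpeechUtterance(AVSpeechSynthesizer synthesizer, AVSpeechUtterance utterance)
    {
        Finished.TrySetResult(true);
    }
}

// Usage: assign the delegate instead of subscribing to the event.
// var del = new FinishedSpeakingDelegate();
// speechSynthesizer.Delegate = del;
// speechSynthesizer.SpeakUtterance(speechUtterance);
// await del.Finished.Task;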

Example usage:

await speechSynthesizer.SpeakUtteranceAsync(speechUtterance, cancelToken);

Example extension method:

using System.Threading;
using System.Threading.Tasks;
using AVFoundation;

public static class AClassyClass
{
    public static async Task SpeakUtteranceAsync(this AVSpeechSynthesizer synthesizer, AVSpeechUtterance speechUtterance, CancellationToken cancelToken)
    {
        // Completed when the utterance finishes speaking (or is cancelled)
        var tcsUtterance = new TaskCompletionSource<bool>();
        try
        {
            // Subscribe before speaking so the finish event cannot be missed
            synthesizer.DidFinishSpeechUtterance += OnFinishedSpeechUtterance;
            synthesizer.SpeakUtterance(speechUtterance);
            using (cancelToken.Register(TryCancel))
            {
                await tcsUtterance.Task;
            }
        }
        finally
        {
            synthesizer.DidFinishSpeechUtterance -= OnFinishedSpeechUtterance;
        }

        void TryCancel()
        {
            synthesizer?.StopSpeaking(AVSpeechBoundary.Word);
            tcsUtterance?.TrySetResult(true);
        }

        void OnFinishedSpeechUtterance(object sender, AVSpeechSynthesizerUteranceEventArgs args)
        {
            if (speechUtterance == args.Utterance)
                tcsUtterance?.TrySetResult(true);
        }
    }
}
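For completeness, here is one way (a sketch, not shown in the original answer) the question's DependencyService pieces could be reshaped so the button handler can actually await the speech. The SpeakAsync name and the async interface are assumptions; the utterance settings and the s variable come from the question, and the assembly Dependency registration attribute is omitted:

using System.Threading;
using System.Threading.Tasks;
using AVFoundation;
using Xamarin.Forms;

public interface ITextToSpeech
{
    // Hypothetical async replacement for the original void Speak(string text)
    Task SpeakAsync(string text, CancellationToken cancelToken = default);
}

public class TextToSpeechImplementation : ITextToSpeech
{
    public async Task SpeakAsync(string text, CancellationToken cancelToken = default)
    {
        var speechSynthesizer = new AVSpeechSynthesizer();
        var speechUtterance = new AVSpeechUtterance(text)
        {
            Rate = AVSpeechUtterance.MaximumSpeechRate / 2.8f,
            Voice = AVSpeechSynthesisVoice.FromLanguage(App.current_lang),
            Volume = 0.5f
        };

        // Returns only after DidFinishSpeechUtterance fires, thanks to the
        // SpeakUtteranceAsync extension defined above.
        await speechSynthesizer.SpeakUtteranceAsync(speechUtterance, cancelToken);
    }
}

// In the page's code-behind, the handler can then await the call directly:
async void Play_Clicked(object sender, System.EventArgs e)
{
    System.Diagnostics.Debug.Print("START PLAYING {0}", System.DateTime.Now.ToString("HH:mm:ss"));
    await DependencyService.Get<ITextToSpeech>().SpeakAsync(s);
    System.Diagnostics.Debug.Print("STOP  PLAYING {0}", System.DateTime.Now.ToString("HH:mm:ss"));
}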

Note: Xamarin.Essentials includes this TaskCompletionSource-based flow and provides TextToSpeech.SpeakAsync, which gives you the same functionality out of the box.

Re: https://docs.microsoft.com/en-us/xamarin/essentials/text-to-speech
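As a quick illustration (my addition, not part of the original answer), the Xamarin.Essentials call can be awaited straight from the button handler; the SpeechOptions values below are assumptions that loosely mirror the question's utterance settings:

using Xamarin.Essentials;

async void Play_Clicked(object sender, System.EventArgs e)
{
    System.Diagnostics.Debug.Print("START PLAYING {0}", System.DateTime.Now.ToString("HH:mm:ss"));

    // SpeakAsync completes only once the utterance has finished speaking.
    await TextToSpeech.SpeakAsync(s, new SpeechOptions
    {
        Volume = 0.5f,
        Pitch = 1.0f
    });

    System.Diagnostics.Debug.Print("STOP  PLAYING {0}", System.DateTime.Now.ToString("HH:mm:ss"));
}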