Calling a non-Task asynchronous method from ASP.NET Web API

Asked: 2015-03-31 10:37:46

Tags: c# asp.net asynchronous asp.net-web-api async-await

In my Web API project I have an [HttpPost] method, public HttpResponseMessage saveFiles() {}, which saves some audio files to the server. After the files are saved, I need to call a method from the Microsoft.Speech server API. The method is asynchronous, but it returns void:

public void RecognizeAsync(RecognizeMode mode);

I want to wait until this method finishes and then return a response to the client with all the information I have collected. I can't use await here because the method returns void. I implemented an event: public event RecognitionFinishedHandler RecognitionFinished;

This event is raised when the recognition finishes.
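The usual way to await an event like this is to bridge it into a Task with TaskCompletionSource. Below is a minimal, generic sketch of that pattern; the Recognizer type and RecognitionDone event are made up for illustration and are not part of any real API:

```csharp
using System;
using System.Threading.Tasks;

class Recognizer
{
    // Hypothetical event-based API, mirroring the shape described above:
    // a void-returning async start method plus a completion event.
    public event Action<string> RecognitionDone;

    public void RecognizeAsync()
    {
        // ... starts work; raises RecognitionDone when finished ...
        var handler = RecognitionDone;
        if (handler != null) handler("result");
    }
}

static class RecognizerExtensions
{
    // Bridges the completion event into an awaitable Task<string>.
    public static Task<string> RecognizeTaskAsync(this Recognizer r)
    {
        var tcs = new TaskCompletionSource<string>();
        r.RecognitionDone += result => tcs.TrySetResult(result);
        r.RecognizeAsync();
        return tcs.Task;
    }
}
```

A caller can then write `var result = await recognizer.RecognizeTaskAsync();` instead of subscribing to the event manually.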

EDIT: I wrapped this event with a Task, but I think I'm doing something wrong, because I can't get the RecognizeAsync call to actually do its work. It seems the recognition never runs now. Here is my code:

The class that performs the speech recognition:

public delegate void RecognitionFinishedHandler(object sender);
public class SpeechActions
{
    public event RecognitionFinishedHandler RecognitionFinished;
    private SpeechRecognitionEngine sre;
    public Dictionary<string, List<TimeSpan>> timeTags; // contains the times of each tag: "tag": [00:00, 00:23 .. ]

    public SpeechActions()
    {
        sre = new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US"));
        sre.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(sre_SpeechRecognized);
        sre.AudioStateChanged += new EventHandler<AudioStateChangedEventArgs>(sre_AudioStateChanged);
    }

    /// <summary>
    /// Calculates the tags appearances in a voice over wav file.
    /// </summary>
    /// <param name="path">The path to the voice over wav file.</param>
    public void CalcTagsAppearancesInVO(string path, string[] tags, TimeSpan voLength)
    {
        timeTags = new Dictionary<string, List<TimeSpan>>();
        sre.SetInputToWaveFile(path);

        foreach (string tag in tags)
        {
            GrammarBuilder gb = new GrammarBuilder(tag);
            gb.Culture = new System.Globalization.CultureInfo("en-US");
            Grammar g = new Grammar(gb);
            sre.LoadGrammar(g);
        }

        sre.RecognizeAsync(RecognizeMode.Multiple);
    }

    void sre_AudioStateChanged(object sender, AudioStateChangedEventArgs e)
    {
        if (e.AudioState == AudioState.Stopped)
        {
            sre.RecognizeAsyncStop();
            if (RecognitionFinished != null)
            {
                RecognitionFinished(this);
            }
        }
    }

    void sre_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        string word = e.Result.Text;
        TimeSpan time = e.Result.Audio.AudioPosition;
        if(!timeTags.ContainsKey(word))
        {
            timeTags.Add(word, new List<TimeSpan>());
        } 

        // add the found time
        timeTags[word].Add(time);
    }
}
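As an aside, SpeechRecognitionEngine also exposes a RecognizeCompleted event that fires when an async recognition pass ends on its own, which can be a more direct completion signal than watching AudioStateChanged for a Stopped state. A sketch of subscribing to it in the constructor above, assuming the Microsoft.Speech engine mirrors the System.Speech surface here:

```csharp
// Alternative completion signal: RecognizeCompleted fires when the async
// recognition pass ends (end of the wav input, an error, or cancellation).
sre.RecognizeCompleted += (sender, e) =>
{
    if (e.Error != null)
    {
        // Surface failures instead of completing silently.
        System.Diagnostics.Trace.TraceError("Recognition failed: " + e.Error.Message);
    }
    if (RecognitionFinished != null)
    {
        RecognitionFinished(this);
    }
};
```

This also removes the need to call RecognizeAsyncStop from inside the AudioStateChanged handler.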

And the action where I call it, plus the event handler:

    [HttpPost]
    public HttpResponseMessage saveFiles()
    {
        if (HttpContext.Current.Request.Files.AllKeys.Any())
        {
            string originalFolder = HttpContext.Current.Server.MapPath("~/files/original/");
            string lowFolder = HttpContext.Current.Server.MapPath("~/files/low/");
            string audioFolder = HttpContext.Current.Server.MapPath("~/files/audio/");
            string voiceoverPath = Path.Combine(originalFolder, Path.GetFileName(HttpContext.Current.Request.Files["voiceover"].FileName));
            string outputFile = HttpContext.Current.Server.MapPath("~/files/output/") + "result.mp4";
            string voiceoverWavPath = Path.Combine(audioFolder, "voiceover.wav");
            var voiceoverInfo = Resource.From(voiceoverWavPath).LoadMetadata().Streams.OfType<AudioStream>().ElementAt(0).Info;
            DirectoryInfo di = new DirectoryInfo(originalFolder);
            // speech recognition
            // get tags from video filenames
            string sTags = "";
            di = new DirectoryInfo(HttpContext.Current.Server.MapPath("~/files/low/"));

            foreach (var item in di.EnumerateFiles())
            {
                string filename = item.Name.Substring(0, item.Name.LastIndexOf("."));
                if (item.Name.ToLower().Contains("thumbs") || filename == "voiceover")
                {
                    continue;
                }
                sTags += filename + ",";
            }
            if (sTags.Length > 0) // remove last ','
            {
                sTags = sTags.Substring(0, sTags.Length - 1);
            }
            string[] tags = sTags.Split(new char[] { ',' });

            // HERE STARTS THE PROBLEMATIC PART! ----------------------------------------------------
            var task = GetSpeechActionsCalculated(voiceoverWavPath, tags, voiceoverInfo.Duration);

            // now return the times to the client
            var finalTimes = GetFinalTimes(HttpContext.Current.Server.MapPath("~/files/low/"), task.Result.timeTags);
            var goodResponse = Request.CreateResponse(HttpStatusCode.OK, finalTimes);
            return goodResponse;
        }
        return Request.CreateResponse(HttpStatusCode.OK, "no files");
    }
    private Task<SpeechActions> GetSpeechActionsCalculated(string voPath, string[] tags, TimeSpan voLength)
    {
        var tcs = new TaskCompletionSource<SpeechActions>();
        SpeechActions sa = new SpeechActions();
        sa.RecognitionFinished += (s) =>
        {
            tcs.TrySetResult((SpeechActions)s);
        };
        sa.CalcTagsAppearancesInVO(voPath, tags, voLength);

        return tcs.Task;
    }
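One risk with a TaskCompletionSource bridge like GetSpeechActionsCalculated is that the returned task never completes if the event never fires, which leaves the request hanging. A defensive variant could fail the task after a deadline; the five-minute timeout below is an illustrative value, not something from the original code:

```csharp
private Task<SpeechActions> GetSpeechActionsCalculated(string voPath, string[] tags, TimeSpan voLength)
{
    var tcs = new TaskCompletionSource<SpeechActions>();
    SpeechActions sa = new SpeechActions();
    sa.RecognitionFinished += s => tcs.TrySetResult((SpeechActions)s);

    // Fail the task instead of hanging forever if recognition never signals completion.
    var timeout = new System.Threading.Timer(
        _ => tcs.TrySetException(new TimeoutException("Speech recognition timed out.")),
        null, TimeSpan.FromMinutes(5), System.Threading.Timeout.InfiniteTimeSpan);

    // Dispose the timer once the task completes, however it completes.
    tcs.Task.ContinueWith(_ => timeout.Dispose());

    sa.CalcTagsAppearancesInVO(voPath, tags, voLength);
    return tcs.Task;
}
```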

1 Answer:

Answer 0 (score: 2)

Your edit is almost there; you just need to await the task:

[HttpPost]
public async Task<HttpResponseMessage> saveFiles()
{
    if (HttpContext.Current.Request.Files.AllKeys.Any())
    {
        ...

        string[] tags = sTags.Split(new char[] { ',' });

        var speechActions = await GetSpeechActionsCalculated(voiceoverWavPath, tags, voiceoverInfo.Duration);

        // now return the times to the client
        var finalTimes = GetFinalTimes(HttpContext.Current.Server.MapPath("~/files/low/"), speechActions.timeTags);
        var goodResponse = Request.CreateResponse(HttpStatusCode.OK, finalTimes);
        return goodResponse;
    }
    return Request.CreateResponse(HttpStatusCode.OK, "no files");
}
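The original version's task.Result is also dangerous in ASP.NET, not just awkward: blocking the request thread on a task whose continuation needs that same thread's AspNetSynchronizationContext can deadlock. The async/await rewrite avoids this. A minimal contrast (argument names are illustrative):

```csharp
// Don't do this in ASP.NET: .Result blocks the request thread while the
// awaited continuation may be waiting for that same synchronization context.
var sa = GetSpeechActionsCalculated(path, tags, length).Result;   // can deadlock

// Do this instead: the thread is released until the task completes.
var sa2 = await GetSpeechActionsCalculated(path, tags, length);
```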