SpeechRecognitionEngine stops recognizing when the computer is locked

Time: 2016-07-05 18:25:17

Tags: c# speech-recognition audio-recording

I am trying to create a speech recognition program that, as part of a home automation project, needs to keep running while the Windows computer is locked. But it seems that the SpeechRecognitionEngine stops recognizing when the computer is locked (and resumes when the computer is unlocked).

My current test program looks like this:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using Microsoft.Speech.Recognition;
using System.Globalization;

namespace WindowsFormsApplication1
{
    public partial class Form1 : Form
    {
        SpeechRecognitionEngine sre;

        public Form1()
        {
            InitializeComponent();
            // Recognize US English from the default audio device
            CultureInfo ci = new CultureInfo("en-us");
            sre = new SpeechRecognitionEngine(ci);
            sre.SetInputToDefaultAudioDevice();

            // Listen continuously for the single phrase "Hello"
            GrammarBuilder gb = new GrammarBuilder("Hello");
            sre.LoadGrammarAsync(new Grammar(gb));
            sre.SpeechRecognized += sre_SpeechRecognized;
            sre.RecognizeAsync(RecognizeMode.Multiple);
        }

        void sre_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            listBox1.Items.Add(DateTime.Now.ToString() + " " + e.Result.Text);
        }
    }
}

I am wondering whether changing the SpeechRecognitionEngine's input to a live audio stream from the microphone (perhaps using the SetInputToAudioStream or SetInputToWaveStream methods) would solve the problem, since the microphone does not appear to be turned off when the computer is locked (tested with Sound Recorder).

Unfortunately, I have not been able to find a way to get a live stream of the microphone input.

1 answer:

Answer 0 (score: 4):

I found a workaround using NAudio (http://naudio.codeplex.com/) and the SpeechStreamer class from this StackOverflow answer (https://stackoverflow.com/a/11813276/2950065).

The updated test program below keeps recognizing while the computer is locked:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using Microsoft.Speech.Recognition;
using System.Globalization;
using NAudio.Wave;
using System.IO;
using System.IO.Pipes;

namespace WindowsFormsApplication1
{
    public partial class Form1 : Form
    {
        SpeechRecognitionEngine sre;
        WaveIn wi;
        SpeechStreamer ss;

        public Form1()
        {
            InitializeComponent();

            // Capture microphone audio with NAudio, using a function callback
            // so recording does not depend on the form's window message loop
            WaveCallbackInfo callbackInfo = WaveCallbackInfo.FunctionCallback();
            wi = new WaveIn(callbackInfo);
            ss = new SpeechStreamer(100000); // buffer size in bytes
            wi.DataAvailable += wi_DataAvailable;
            wi.StartRecording();

            CultureInfo ci = new CultureInfo("en-us");
            sre = new SpeechRecognitionEngine(ci);
            // The default format for WaveIn is 8000 samples/sec, 16 bit, 1 channel
            Microsoft.Speech.AudioFormat.SpeechAudioFormatInfo safi = new Microsoft.Speech.AudioFormat.SpeechAudioFormatInfo(8000, Microsoft.Speech.AudioFormat.AudioBitsPerSample.Sixteen, Microsoft.Speech.AudioFormat.AudioChannel.Mono);
            sre.SetInputToAudioStream(ss, safi);
            GrammarBuilder gb = new GrammarBuilder("Hello");
            sre.LoadGrammarAsync(new Grammar(gb));
            sre.SpeechRecognized += sre_SpeechRecognized;
            sre.RecognizeAsync(RecognizeMode.Multiple);
        }

        void wi_DataAvailable(object sender, WaveInEventArgs e)
        {
            // Forward each recorded buffer into the stream that feeds the recognizer
            ss.Write(e.Buffer, 0, e.BytesRecorded);
        }

        void sre_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            listBox1.Items.Add(DateTime.Now.ToString() + " " + e.Result.Text);
        }
    }
}
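
The SpeechStreamer class itself is not reproduced here; see the linked answer for the original. Its essential property is that it is a Stream whose Read blocks until audio data is available instead of returning 0, so the recognizer treats it as a live, never-ending source. As a rough illustration only (a minimal sketch, not the circular-buffer implementation from the linked answer; the class name BlockingAudioStream is made up here), such a stream could look like:

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;

// Minimal sketch of a live-audio Stream: Read blocks until data arrives
// rather than returning 0, which would signal end-of-stream to the engine.
public class BlockingAudioStream : Stream
{
    private readonly Queue<byte> buffer = new Queue<byte>();
    private readonly object sync = new object();

    public override bool CanRead { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override long Length { get { return -1; } }
    public override long Position { get { return 0; } set { } }

    public override int Read(byte[] target, int offset, int count)
    {
        lock (sync)
        {
            // Wait for audio instead of reporting end-of-stream
            while (buffer.Count == 0)
                Monitor.Wait(sync);

            int read = Math.Min(count, buffer.Count);
            for (int i = 0; i < read; i++)
                target[offset + i] = buffer.Dequeue();
            return read;
        }
    }

    public override void Write(byte[] source, int offset, int count)
    {
        lock (sync)
        {
            for (int i = 0; i < count; i++)
                buffer.Enqueue(source[offset + i]);
            Monitor.Pulse(sync); // wake a blocked Read
        }
    }

    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}

One more detail worth noting: the SpeechAudioFormatInfo passed to SetInputToAudioStream must describe the bytes actually written to the stream, so if you change wi.WaveFormat from NAudio's default (8000 samples/sec, 16 bit, mono), update the format info to match.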