SpeechRecognitionEngine in a BackgroundWorker

Date: 2012-02-24 00:34:05

Tags: c# backgroundworker speech-recognition sapi

I am trying to write a C# application using Windows Forms and System.Speech that converts a WAV file to text. I have seen many samples online showing how to do this, but none of them is very robust. I want to write an application that parses small chunks of a large WAV file using BackgroundWorker threads, but when I call engine.Recognize() I keep getting the following exception in my thread's DoWork function:

"No audio input is supplied to this recognizer. If a microphone is connected to the system, use the method SetInputToDefaultAudioDevice; otherwise use SetInputToWaveFile, SetInputToWaveStream, or SetInputToAudioStream to perform speech recognition from pre-recorded audio."
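
As far as I can tell, this is the same message Recognize() produces when no input source has been set at all. The following minimal sketch (not my real code, just for reference) reproduces it:

using System.Speech.Recognition;

class Repro {
    static void Main() {
        // No SetInputTo* call is made here, so Recognize() throws an
        // InvalidOperationException carrying the message quoted above.
        using (var engine = new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US"))) {
            engine.LoadGrammar(new DictationGrammar());
            engine.Recognize(); // throws: "No audio input is supplied to this recognizer..."
        }
    }
}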

Here is the code in my DoWork() function:

SpeechRecognitionEngine engine = new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US"));
engine.SetInputToWaveFile(fname);
engine.LoadGrammar(new DictationGrammar());
engine.BabbleTimeout = TimeSpan.FromSeconds(10.0);
engine.EndSilenceTimeout = TimeSpan.FromSeconds(10.0);
engine.EndSilenceTimeoutAmbiguous = TimeSpan.FromSeconds(10.0);
engine.InitialSilenceTimeout = TimeSpan.FromSeconds(10.0);

BackgroundWorker w = (BackgroundWorker)sender;
while (true)
{
    RecognitionResult data = engine.Recognize();
    if (data == null)
        break;
    if (w == null) //our thread died from beneath us
        break;
    if (!w.IsBusy) //our thread died from beneath us
        break;
    if (w.CancellationPending) //notice to cancel
        break;
    w.ReportProgress(0, data.Text);
}

I am starting several BackgroundWorker threads that run this code. If I use a single thread, I do not see this problem.
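
For completeness, the workers are started roughly like this (a simplified sketch, not my exact code: the chunk paths, textBox1, and Worker_DoWork are placeholder names, and passing the file name through RunWorkerAsync/e.Argument is just how this sketch feeds fname to DoWork):

// Simplified sketch of how the workers are started; chunk paths are placeholders.
var chunkFiles = new[] { @"c:\audio\chunk1.wav", @"c:\audio\chunk2.wav" };
foreach (var file in chunkFiles)
{
    var worker = new BackgroundWorker();
    worker.WorkerReportsProgress = true;
    worker.WorkerSupportsCancellation = true;
    worker.DoWork += Worker_DoWork;    // runs the DoWork code shown above
    worker.ProgressChanged += (s, e) =>
        textBox1.AppendText((string)e.UserState + Environment.NewLine);
    worker.RunWorkerAsync(file);       // file name available as e.Argument in DoWork
}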

1 answer:

Answer 0 (score: 1):

You could try this approach. I have tested it with both Console and Windows Forms application types.

class Program {
    public static void Main() {
        // Each Recognizer kicks off its recognition on a thread-pool thread
        // and raises Completed when it is done.
        var r1 = new Recognizer(@"c:\proj\test.wav");
        r1.Completed += (sender, e) => Console.WriteLine(r1.Result.Text);

        var r2 = new Recognizer(@"c:\proj\test.wav");
        r2.Completed += (sender, e) => Console.WriteLine(r2.Result.Text);

        // Keep the process alive until both recognitions have finished.
        Console.ReadLine();
    }
}

class Recognizer {
    private readonly string _fileName;
    private readonly AsyncOperation _operation;
    private volatile RecognitionResult _result;

    public Recognizer(string fileName) {
        _fileName = fileName;
        // Capture the current synchronization context (e.g. the UI thread in a
        // Windows Forms app) so Completed is raised back on the calling thread.
        _operation = AsyncOperationManager.CreateOperation(null);
        _result = null;

        // Run the recognition on a thread-pool thread.
        var worker = new Action(Run);
        worker.BeginInvoke(delegate(IAsyncResult result) {
            worker.EndInvoke(result);
        }, null);
    }

    private void Run() {
        try {
            SpeechRecognitionEngine engine = new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US"));
            engine.SetInputToWaveFile(_fileName);
            engine.LoadGrammar(new DictationGrammar());
            engine.BabbleTimeout = TimeSpan.FromSeconds(10.0);
            engine.EndSilenceTimeout = TimeSpan.FromSeconds(10.0);
            engine.EndSilenceTimeoutAmbiguous = TimeSpan.FromSeconds(10.0);
            engine.InitialSilenceTimeout = TimeSpan.FromSeconds(10.0);
            _result = engine.Recognize();
        }
        finally {
            // Marshal the Completed notification back to the thread that created this Recognizer.
            _operation.PostOperationCompleted(delegate {
                RaiseCompleted();
            }, null);
        }
    }

    public RecognitionResult Result {
        get { return _result; }
    }

    public event EventHandler Completed;

    protected virtual void OnCompleted(EventArgs e) {
        if (Completed != null)
            Completed(this, e);
    }

    private void RaiseCompleted() {
        OnCompleted(EventArgs.Empty);
    }
}
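
If you also need intermediate results (the equivalent of ReportProgress in your loop), Run could recognize in a loop and post each phrase back through the same AsyncOperation. Here is a rough sketch of that variation; the Progress event is hypothetical and not part of the class above:

// Hypothetical variation of Run(): recognize repeatedly and post each phrase
// back to the creating thread, mirroring ReportProgress from the question.
public event Action<string> Progress;   // hypothetical event, not in the original class

private void Run() {
    try {
        using (var engine = new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US"))) {
            engine.SetInputToWaveFile(_fileName);
            engine.LoadGrammar(new DictationGrammar());

            RecognitionResult data;
            while ((data = engine.Recognize()) != null) {   // null once the wave file is exhausted
                _result = data;                             // keep the most recent result
                string text = data.Text;
                _operation.Post(delegate {
                    var handler = Progress;
                    if (handler != null)
                        handler(text);
                }, null);
            }
        }
    }
    finally {
        _operation.PostOperationCompleted(delegate {
            RaiseCompleted();
        }, null);
    }
}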