I'm having a hard time figuring out how to solve this problem, and I'm not sure whether I've set up the threading correctly, or whether it's even possible to solve this cleanly.
This is an Android app that, at certain times, reads certain strings aloud as TTS (using the native Android TTS). During this TTS playback, the user should be able to interject with commands such as "stop" or "pause". That recognition is done using the iSpeech API.
Our current solution is to run the TTS as a thread that outputs the correct strings. Once the user presses a button to start speech recognition (via an Intent), the app performs the recognition and handles it perfectly, but the TTS never outputs anything again. Logcat shows the following error:
11-28 02:18:57.072: W/TextToSpeech(16383): speak failed: not bound to TTS engine
I've considered running the speech recognition as its own thread that pauses the TTS, but the problem is that the timer controlling the TTS would then drift away from what it should be.
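To make the timer problem concrete, here's a minimal plain-Java sketch of what I mean by a pause-aware delay: time spent paused is not counted toward the delay, so the schedule of build actions stays correct after a pause. (PausableDelay is a made-up name for illustration, not something in the app.)

```java
// Hypothetical helper: blocks for roughly delayMillis of *unpaused*
// wall-clock time; time spent paused does not count toward the delay.
class PausableDelay {
    private final Object lock = new Object();
    private boolean paused = false;

    public void pause() {
        synchronized (lock) {
            paused = true;
        }
    }

    public void resume() {
        synchronized (lock) {
            paused = false;
            lock.notifyAll();
        }
    }

    public void delay(long delayMillis) {
        long remaining = delayMillis;
        try {
            while (remaining > 0) {
                synchronized (lock) {
                    while (paused) {
                        lock.wait(); // paused time is not counted
                    }
                }
                long start = System.nanoTime();
                Thread.sleep(Math.min(remaining, 50)); // sleep in short slices
                remaining -= (System.nanoTime() - start) / 1_000_000L;
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // stopped mid-delay; give up
        }
    }
}
```

The short sleep slices mean a pause takes effect within about 50 ms, which would be accurate enough for announcing build actions.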
Any advice or help would be greatly appreciated.
The relevant code for the thread and the Intent follows:
Thread:
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Prevent device from sleeping mid build.
    getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
    setContentView(R.layout.activity_build_order);

    mPlayer = MediaPlayer.create(BuildOrderActivity.this, R.raw.bing);
    params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, "stringId");

    tts = new TextToSpeech(BuildOrderActivity.this, new TextToSpeech.OnInitListener() {
        @SuppressWarnings("deprecation")
        public void onInit(int status) {
            if (status != TextToSpeech.ERROR) {
                tts.setLanguage(Locale.US);
                tts.setOnUtteranceCompletedListener(new OnUtteranceCompletedListener() {
                    public void onUtteranceCompleted(String utteranceId) {
                        mPlayer.start();
                    }
                });
            }
        }
    });

    buttonStart = (Button) findViewById(R.id.buttonStartBuild);
    buttonStart.setOnClickListener(new View.OnClickListener() {
        public void onClick(View v) {
            startBuild = new StartBuildRunnable();
            Thread t = new Thread(startBuild);
            t.start();
        }
    });

    ... // code continues: onCreate setup for the view
}

public class StartBuildRunnable implements Runnable {
    public void run() {
        double delay;
        buildActions = parseBuildXMLAction();
        buildTimes = parseBuildXMLTime();

        say("Build has started");
        delayForNextAction((getSeconds(buildTimes.get(0)) * 1000));
        say(buildActions.get(0));

        for (int i = 1; i < buildActions.size(); i++) {
            delay = calcDelayUntilNextAction(buildTimes.get(i - 1), buildTimes.get(i));
            delayForNextAction((long) (delay * 1000));
            say(buildActions.get(i));
            //listViewBuildItems.setSelection(i);
        }
        say("Build has completed");
    }
}
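For a "stop" command, one approach I've considered is interrupting the build thread so it exits between actions. Here's a rough plain-Java sketch of the idea (StoppableBuild is a made-up stand-in for StartBuildRunnable, with println standing in for say() and sleep standing in for delayForNextAction()):

```java
import java.util.List;

// Hypothetical stand-in for StartBuildRunnable: exits cleanly when the
// thread is interrupted (e.g. by a recognized "stop" command).
class StoppableBuild implements Runnable {
    private final List<String> actions;
    private final long delayMillis;
    public volatile boolean completed = false;

    public StoppableBuild(List<String> actions, long delayMillis) {
        this.actions = actions;
        this.delayMillis = delayMillis;
    }

    public void run() {
        try {
            for (String action : actions) {
                Thread.sleep(delayMillis);  // stands in for delayForNextAction()
                System.out.println(action); // stands in for say(action)
            }
            completed = true;
        } catch (InterruptedException e) {
            // A "stop" command interrupts the thread; fall through and exit.
            Thread.currentThread().interrupt();
        }
    }
}
```

Since the delays are implemented with sleep, the InterruptedException gives a natural exit point without polling a flag between actions.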
Intent:
/**
 * Fire an intent to start the speech recognition activity.
 * @throws InvalidApiKeyException
 */
private void startRecognition() {
    setupFreeFormDictation();
    try {
        recognizer.startRecord(new SpeechRecognizerEvent() {
            @Override
            public void onRecordingComplete() {
                updateInfoMessage("Recording completed.");
            }

            @Override
            public void onRecognitionComplete(SpeechResult result) {
                Log.v(TAG, "Recognition complete");
                // TODO: Once something is recognized, tie it to an action and continue recognizing.
                // Currently recognizes something in the grammar and then stops listening until
                // the next button press.
                if (result != null) {
                    Log.d(TAG, "Text Result:" + result.getText());
                    Log.d(TAG, "Text Conf:" + result.getConfidence());
                    updateInfoMessage("Result: " + result.getText()
                            + "\n\nconfidence: " + result.getConfidence());
                } else {
                    Log.d(TAG, "Result is null...");
                }
            }

            @Override
            public void onRecordingCancelled() {
                updateInfoMessage("Recording cancelled.");
            }

            @Override
            public void onError(Exception exception) {
                updateInfoMessage("ERROR: " + exception.getMessage());
                exception.printStackTrace();
            }
        });
    } catch (BusyException e) {
        e.printStackTrace();
    } catch (NoNetworkException e) {
        e.printStackTrace();
    }
}
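For the TODO in onRecognitionComplete, the plan is to map the recognized text to an action. A minimal sketch of such a mapping in plain Java (CommandDispatcher is hypothetical; in the real app the registered Runnables would stop or pause the TTS thread):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical mapping from recognized phrases to actions; in the real
// app the actions would pause/stop the TTS build thread.
class CommandDispatcher {
    private final Map<String, Runnable> commands = new HashMap<String, Runnable>();

    public void register(String phrase, Runnable action) {
        commands.put(phrase.toLowerCase(), action);
    }

    // Returns true if the recognized text matched a known command.
    public boolean dispatch(String recognizedText) {
        if (recognizedText == null) {
            return false; // recognition returned no result
        }
        Runnable action = commands.get(recognizedText.trim().toLowerCase());
        if (action == null) {
            return false; // not in the command grammar
        }
        action.run();
        return true;
    }
}
```

The boolean return value would also tell onRecognitionComplete whether to restart listening for the next command or report an unrecognized phrase.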