Handling onError() calls with SpeechRecognizer

Posted: 2020-05-27 20:36:01

Tags: android kotlin speech-recognition voice-recognition

I have a LifecycleService that is supposed to record indefinitely, but it gets stuck after a single recording whenever onError() is called with error 5/6/7/8. How can I handle these errors so the service keeps recording indefinitely?

*I'm not pasting all of my snippets, since that might be confusing. You can see that I call prepareVoiceRecording() from onStartCommand() and observe a boolean that calls startListening() each time it fires; onResults() then calls searchSafeWord(), which toggles the observed boolean again. I get two scenarios in logcat:

2020-05-27 22:57:03.583 1260-1260/com.example.invisibleapp D/RescueService: startListening called
2020-05-27 22:57:03.998 1260-1260/com.example.invisibleapp D/RescueService: onReadyForSpeech Called
2020-05-27 22:57:06.504 1260-1260/com.example.invisibleapp D/RescueService: onBeginningOfSpeech Called
2020-05-27 22:57:07.851 1260-1328/com.example.invisibleapp I/le.invisibleap: ProcessProfilingInfo new_methods=1539 is saved saved_to_disk=1 resolve_classes_delay=8000
2020-05-27 22:57:07.854 1260-1260/com.example.invisibleapp D/RescueService: onEndSpeech Called
2020-05-27 22:57:08.080 1260-1260/com.example.invisibleapp D/RescueService: onErrorCalled error is 7

2020-05-27 22:57:33.401 1595-1595/com.example.invisibleapp D/RescueService: startListening called
2020-05-27 22:57:33.760 1595-1595/com.example.invisibleapp D/RescueService: onReadyForSpeech Called
2020-05-27 22:57:37.808 1595-1652/com.example.invisibleapp I/le.invisibleap: ProcessProfilingInfo new_methods=1226 is saved saved_to_disk=1 resolve_classes_delay=8000
2020-05-27 22:57:38.754 1595-1595/com.example.invisibleapp D/RescueService: onErrorCalled error is 6

*Any suggestions?
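For context, the codes 5/6/7/8 from the logs correspond to `SpeechRecognizer` error constants. In this snippet the names and values are hardcoded so it runs without the Android SDK; double-check them against `android.speech.SpeechRecognizer` if in doubt:

```kotlin
// Relevant android.speech.SpeechRecognizer error constants, hardcoded
// here so this snippet runs without the Android SDK.
val speechErrorNames = mapOf(
    5 to "ERROR_CLIENT",
    6 to "ERROR_SPEECH_TIMEOUT",
    7 to "ERROR_NO_MATCH",
    8 to "ERROR_RECOGNIZER_BUSY"
)

// Maps an onError() code to a readable name for logging.
fun describeSpeechError(code: Int): String =
    speechErrorNames[code] ?: "UNKNOWN($code)"

fun main() {
    // The two codes from the logcat excerpts above:
    println(describeSpeechError(7)) // ERROR_NO_MATCH
    println(describeSpeechError(6)) // ERROR_SPEECH_TIMEOUT
}
```

So the first logcat scenario ends in ERROR_NO_MATCH and the second in ERROR_SPEECH_TIMEOUT.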

My service's onStartCommand:

override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        Log.d(TAG, "onStartCommand called")
        super.onStartCommand(intent, flags, startId)
        viewModel = ViewModelProvider.NewInstanceFactory().create(AppViewModel::class.java)
        serviceUp = true
        s1 = ArrayList()
        s2 = ArrayList()
        /*  val audioManager = getSystemService(Context.AUDIO_SERVICE) as AudioManager
          audioManager.adjustStreamVolume(AudioManager.STREAM_MUSIC, AudioManager.ADJUST_MUTE, 0)*/
        mLocationManager = getSystemService(AppCompatActivity.LOCATION_SERVICE) as LocationManager
        pref = getSharedPreferences("userPref", Context.MODE_PRIVATE)
        Log.d(TAG, pref.getString("safeWord", "")!!)
        phoneNumber = pref.getString("phoneNumber", "")!!
        /*  val componentName = ComponentName(
              this,
              FullscreenActivity::class.java
          )
             packageManager.setComponentEnabledSetting(
                 componentName,
                 PackageManager.COMPONENT_ENABLED_STATE_DISABLED,
                 PackageManager.DONT_KILL_APP
             )*/
        prepareVoiceRecording()
        viewModel.isRecording.observe(this, androidx.lifecycle.Observer {
            if (serviceUp) {
                if (it) {
                    mSpeechRecognizer.startListening(mSpeechRecognizerIntent)
                        .also { Log.d(TAG, "startListening called") }

                } else {
                    mSpeechRecognizer.stopListening().also { Log.d(TAG, "stopListening called") }
                        .also { viewModel.isRecording.value = true }
                }
            }
        })
        //scheduler.schedule(jobInfo.build())
        // val sleepCoroutine = SleepCoroutine()
        // sleepCoroutine.sleep()
        return START_STICKY
    }

onBind:

    @RequiresApi(Build.VERSION_CODES.O)
    override fun onBind(intent: Intent): IBinder? {
        super.onBind(intent)
        return null
    }

My function that initializes the speech recognizer:

    private fun prepareVoiceRecording() {
        mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(applicationContext)
        mSpeechRecognizerIntent.putExtra(
            RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
        )
        mSpeechRecognizerIntent.putExtra(
            RecognizerIntent.EXTRA_LANGUAGE,
            Locale.getDefault()
        )
        mSpeechRecognizer.setRecognitionListener(
            object : RecognitionListener {
                override fun onReadyForSpeech(bundle: Bundle) {
                    Log.d(TAG, "onReadyForSpeech Called")
                }

                override fun onBeginningOfSpeech() {
                    Log.d(TAG, "onBeginningOfSpeech Called")
                }

                override fun onRmsChanged(v: Float) {
                    Log.d(TAG, "onRmsChanged Called")
                }

                override fun onBufferReceived(bytes: ByteArray) {
                    Log.d(TAG, "onBufferReceived Called")
                }

                override fun onEndOfSpeech() {
                    Log.d(TAG, "onEndSpeech Called")
                    isEndOfSpeech = true
                }

                @RequiresApi(Build.VERSION_CODES.M)
                override fun onError(i: Int) {
                    Log.d(TAG, "onErrorCalled error is $i")
                    // This early return is effectively a no-op: neither branch
                    // restarts listening, which is where the service gets stuck.
                    if (!isEndOfSpeech)
                        return
                }

                @RequiresApi(Build.VERSION_CODES.M)
                override fun onResults(bundle: Bundle?) {
                    Log.d(TAG, "onResults Called")
                    // getting all the matches
                    val matches = bundle!!
                        .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)!!
                    Log.d(TAG, "safeWord is ${pref.getString("safeWord", "")}")
                    searchSafeWord(matches)
                }

                override fun onPartialResults(bundle: Bundle) {
                    Log.d(TAG, "onPartialResults Called")
                }

                override fun onEvent(i: Int, bundle: Bundle) {
                    Log.d(TAG, "onEvent Called")
                }
            })
    }
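My current thinking is that onError() should decide whether to simply call startListening() again or to destroy and recreate the recognizer first. As a plain-Kotlin sketch (constant values hardcoded since they normally come from `android.speech.SpeechRecognizer`, and the restart rule is my assumption, not something I've verified against the framework):

```kotlin
// SpeechRecognizer error constants, hardcoded for an SDK-free sketch.
const val ERROR_CLIENT = 5
const val ERROR_SPEECH_TIMEOUT = 6
const val ERROR_NO_MATCH = 7
const val ERROR_RECOGNIZER_BUSY = 8

// Rule of thumb (assumption): ERROR_SPEECH_TIMEOUT and ERROR_NO_MATCH just
// mean "nothing useful was heard", so calling startListening() again should
// be enough. ERROR_CLIENT and ERROR_RECOGNIZER_BUSY suggest the recognizer
// object itself is in a bad state and should be destroyed and recreated
// before listening again.
fun shouldRecreateRecognizer(errorCode: Int): Boolean = when (errorCode) {
    ERROR_CLIENT, ERROR_RECOGNIZER_BUSY -> true
    else -> false
}

fun main() {
    for (code in listOf(5, 6, 7, 8)) {
        println("error $code -> recreate=${shouldRecreateRecognizer(code)}")
    }
}
```

Inside onError() this would become something like: if shouldRecreateRecognizer(i), call mSpeechRecognizer.destroy() and prepareVoiceRecording() again, then set viewModel.isRecording.value = true to re-trigger the observer. I haven't tested that part, which is why I'm asking.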

0 Answers:

No answers