Chatbot view not updating when using speech recognition - Angular

Asked: 2019-11-18 23:41:31

Tags: angular chatbot

I am trying to build a chatbot in Angular using the Dialogflow API and speech recognition. I am able to convert speech to text and receive a response, but the response is not rendered in the chat window right away; it only shows up after I click on the input box.

Below is the code for my chat popup dialog:

<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <meta http-equiv="X-UA-Compatible" content="ie=edge" />
  </head>
  <body>
    <div class="chat_box">
      <div class="chat_header">
        <h4 class="username">Virtual agent</h4>
      </div>
      <hr />
      <div class="slideOnClick">
      <div #content class="message_content">
          <ng-container #messageList *ngFor="let message of messagesToDisplay">

              <div class="message" [ngClass]="{ 'from': message.sentBy === 'bot',
                                                'to':   message.sentBy === 'user' }">
                {{ message.content }}
              </div>

            </ng-container>
      </div>
      <div class="input_box">
        <input [(ngModel)]="formValue" (keyup.enter)="sendMessage()" placeholder="Your message here..." type="text">
          <button mat-button class="sendButton" [disabled]="!startListenButton" (click)="activateSpeechSearch()"><i class="fa fa-microphone"></i></button>
      </div>
    </div>
    </div>
  </body>
</html>

Below is the corresponding component file:

// PopupServiceService, SpeechRecognitionService and Message are my own files
// (import paths omitted); the @Component decorator is not shown here.
import {
  AfterViewInit, ChangeDetectorRef, ElementRef, OnChanges, OnInit,
  QueryList, ViewChild, ViewChildren
} from '@angular/core';
import { Observable } from 'rxjs';
import { scan } from 'rxjs/operators';
import { isNullOrUndefined } from 'util';

declare var jQuery: any;

export class ChatPopupComponent implements OnInit, AfterViewInit, OnChanges {
  @ViewChildren('messageList') messageList: QueryList<any>;
  @ViewChild('content', undefined) content: ElementRef;
  messages: Observable<Message[]>;
  formValue: string;
  chatVisible = true;
  startListenButton: boolean;
  speechData: string;
  isPopupOpened: boolean;
  messagesToDisplay: Message[];

  constructor(public chat: PopupServiceService, private speechRecognitionService: SpeechRecognitionService,
              private cd: ChangeDetectorRef) { }

  ngOnInit() {
    this.startListenButton = true;
    this.isPopupOpened = false;
    console.log('Change detected');
    this.messages = this.chat.conversation.asObservable().pipe(scan((acc, val) => acc.concat(val))
    );
    this.messages.subscribe(messageResponse => {
      this.messagesToDisplay = messageResponse;
    });
    // tslint:disable-next-line:only-arrow-functions
    (function($) {
      // tslint:disable-next-line:only-arrow-functions
      $(document).ready(function() {
        $('.slideOnClick').hide();
        // tslint:disable-next-line:only-arrow-functions
        $('.chat_header').unbind('click').click(function() {
          console.log('Inside click');
          // this.isPopupOpened = !this.isPopupOpened;
          $('.slideOnClick').slideToggle('slow');
        });
      });
    })(jQuery);
  }

  ngAfterViewInit() {
    this.messageList.changes.subscribe(this.scrollToBottom);
    console.log('After view init');
  }

  scrollToBottom = () => {
    try {
      this.content.nativeElement.scrollTop = this.content.nativeElement.scrollHeight;
    } catch (err) {}
  }

  ngOnChanges() {
    console.log('After view changed');
    this.cd.detectChanges();
  }

  activateSpeechSearch(): void {
    this.startListenButton = false;

    this.speechRecognitionService.record()
        .subscribe(
        // listener
        (value) => {
            this.speechData = value;
            this.formValue = value;
            console.log('listener.speechData:', value);
        },
        // error
        (err) => {
            console.log(err);
            if (err.error === 'no-speech') {
                console.log('--restarting service--');
                this.speakError();
            }
        },
        // completion
        () => {
            console.log('--complete--');
            console.log(this.formValue);
            this.startListenButton = true;
            if (this.formValue !== '' && !isNullOrUndefined(this.formValue)) {
              this.sendMessageFromSpeechRecognition();
            } else {
              this.speakError();
            }
        });
  }


  speakError() {
    this.startListenButton = true;
    this.speechRecognitionService.DestroySpeechObject();
    const synth = window.speechSynthesis;
    const utterance = new SpeechSynthesisUtterance();
    utterance.text = 'I am sorry! Can you please type or tell that one more time after pressing the microphone button?';
    synth.speak(utterance);
    this.cd.detectChanges();
  }

  sendMessageFromSpeechRecognition(): void {
    this.speechRecognitionService.DestroySpeechObject();
    this.sendMessage();
  }

  sendMessage() {
    console.log('Inside send message');
    this.chat.converse(this.formValue);
    this.formValue = '';
    console.log(this.formValue);
    this.cd.detectChanges();
  }

}
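PopupServiceService itself is not shown above. Roughly, it exposes a Subject-based conversation stream that the component consumes with scan(), and a converse() method that pushes the user message and then the Dialogflow reply. A simplified sketch of what I mean (not my exact code; the names are taken from the component above, and the Dialogflow call is just a placeholder):

import { Injectable } from '@angular/core';
import { Subject } from 'rxjs';

export class Message {
  constructor(public content: string, public sentBy: string) {}
}

@Injectable({ providedIn: 'root' })
export class PopupServiceService {
  // The component turns this into an Observable and accumulates it with scan().
  conversation = new Subject<Message[]>();

  converse(msg: string) {
    // Push the user's message first so it shows up immediately.
    this.conversation.next([new Message(msg, 'user')]);

    // The Dialogflow call is asynchronous; the bot reply arrives later and is
    // pushed onto the same stream, which is what should re-render the view.
    this.callDialogflow(msg).then(reply => {
      this.conversation.next([new Message(reply, 'bot')]);
    });
  }

  private callDialogflow(msg: string): Promise<string> {
    // Placeholder for the actual Dialogflow API request.
    return Promise.resolve('(bot reply for: ' + msg + ')');
  }
}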

I tried using detectChanges(), but even that did not work. For example, if I say "Hello", I can see my input in the chat box, but the response does not get updated in the view (the variable messagesToDisplay holds the array of both user inputs and bot responses). Any help would be appreciated!
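One idea I have not ruled out (just a sketch; I have not confirmed this is the cause): the Web Speech API callbacks may fire outside Angular's zone, so the bindings would only refresh on the next user event such as clicking the input box. Wrapping the handlers in NgZone.run would look roughly like this, assuming NgZone is injected in the constructor as private zone: NgZone:

// Hypothetical variant of activateSpeechSearch() using NgZone
// (import { NgZone } from '@angular/core').
activateSpeechSearch(): void {
  this.startListenButton = false;

  this.speechRecognitionService.record().subscribe(
    (value) => {
      // Re-enter Angular's zone so the template updates as soon as a result arrives.
      this.zone.run(() => {
        this.speechData = value;
        this.formValue = value;
      });
    },
    (err) => console.log(err),
    () => {
      this.zone.run(() => {
        this.startListenButton = true;
        if (this.formValue) {
          this.sendMessageFromSpeechRecognition();
        } else {
          this.speakError();
        }
      });
    }
  );
}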

[Screenshot: Chat bot view]

0 Answers:

No answers yet.