Hierarchical Context in Conversations for Spoken Language Understanding

Dr. Chandrakant Bothe
2 min read · Jun 26, 2018

Context in language comes from its sequential patterns. For example, a sentence is formed by a sequence of words, a conversation is formed by a sequence of utterances, and so on. Many approaches have been proposed to model this hierarchical structure, and neural approaches have been deployed to achieve state-of-the-art results.

Example: A piece of conversation with Dialogue Act annotation

As you can see in the small conversation example, Utt2 and Utt4 are identical (in this case both "Yeah"), yet their dialogue acts are very different, because the acts follow from the context utterances Utt1 and Utt3 respectively. If "Yeah" appears after a ‘Yes-No Question’ dialogue act, it is more likely to be a ‘Yes-Answer’ than a ‘Backchannel’ or any other dialogue act. The example is taken from the Switchboard Dialogue Act (SwDA) Corpus, which is annotated with 42 such dialogue acts.
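To make this concrete, here is a small exchange written out in Python with SwDA-style labels. The utterance texts below are hypothetical and only illustrative, not the original SwDA excerpt:

```python
# Hypothetical mini-conversation (illustrative, not the original SwDA excerpt)
# showing how the same surface form "Yeah" receives different dialogue acts
# depending on the preceding (context) utterance.
conversation = [
    ("Utt1", "Do you have any pets?",             "Yes-No Question"),
    ("Utt2", "Yeah.",                             "Yes-Answer"),   # answers Utt1
    ("Utt3", "I have two dogs myself, you know.", "Statement"),
    ("Utt4", "Yeah.",                             "Backchannel"),  # acknowledges Utt3
]

for utt_id, text, act in conversation:
    print(f"{utt_id}: {text!r} -> {act}")
```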

Hence a context-based approach, which takes the preceding utterance (or utterances) into account, is crucial for the language understanding module of any dialogue engine.

Context-based Dialogue Act Recognition using Utterance-level DA Recognizer

We use recurrent neural networks (RNNs) to model such context for conversational analysis, as shown in the figure above. We bind the neural models into an online server for a live web-demo application, Discourse-Wizard, which is hosted on the official website of the EU MSCA project SECURE.
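The exact model behind the demo is not reproduced here, but a minimal sketch of such a hierarchical, context-based recognizer, with an utterance-level RNN that encodes each utterance and a context-level RNN that runs over the resulting utterance vectors, might look like the following PyTorch code. All names and layer sizes are illustrative assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class HierarchicalDAClassifier(nn.Module):
    """Sketch of a hierarchical (context-based) dialogue act classifier."""

    def __init__(self, vocab_size, emb_dim=100, utt_hidden=64,
                 ctx_hidden=64, num_acts=42):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Utterance level: the words of one utterance -> one utterance vector.
        self.utt_rnn = nn.GRU(emb_dim, utt_hidden, batch_first=True)
        # Context level: the sequence of utterance vectors -> contextual states.
        self.ctx_rnn = nn.GRU(utt_hidden, ctx_hidden, batch_first=True)
        self.out = nn.Linear(ctx_hidden, num_acts)  # 42 SwDA dialogue acts

    def forward(self, dialogue):
        # dialogue: (batch, num_utterances, num_words) tensor of word ids.
        b, u, w = dialogue.shape
        words = self.embed(dialogue.view(b * u, w))    # (b*u, w, emb_dim)
        _, utt_vec = self.utt_rnn(words)               # (1, b*u, utt_hidden)
        utt_vecs = utt_vec.squeeze(0).view(b, u, -1)   # (b, u, utt_hidden)
        ctx_states, _ = self.ctx_rnn(utt_vecs)         # (b, u, ctx_hidden)
        return self.out(ctx_states)                    # DA logits per utterance

model = HierarchicalDAClassifier(vocab_size=10_000)
fake_dialogue = torch.randint(1, 10_000, (1, 4, 12))   # 1 dialogue, 4 utterances
print(model(fake_dialogue).shape)                      # torch.Size([1, 4, 42])
```

Because the context-level RNN carries information from the preceding utterances forward, the same "Yeah" can be classified as a Yes-Answer after a question and as a Backchannel after a statement.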

The live web-demo is available here:

Or visit the full project website here:

The above is a live demo where you can analyze your spoken-text conversation, entered line by line, one line per turn. We hope this live web-demo will be useful for conversational analysis applications and for enthusiastic AI and NLP followers.
