The multimodal correction detection problem

Amos Azaria, Keren Nivasch

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In order for socially aware agents to be truly useful, they should have abilities associated with human intelligence, such as the ability to detect their own mistakes from user reactions. This is an instance of implicit feedback. In this work we address the problem of detecting an agent's mistakes by identifying when the user tries to correct the agent. We refer to this problem as the Correction Detection task. We use a multimodal approach, drawing on both the voice (acoustics and non-verbal sounds) and the transcript of the user's spoken commands.
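The paper does not spell out its architecture in this abstract, but the multimodal idea it describes can be illustrated with a minimal late-fusion sketch: an acoustic feature vector and a transcript feature vector are concatenated and scored by a single classifier. All names, dimensions, and the randomly initialized weights below are hypothetical stand-ins, not the authors' model.

```python
import numpy as np

# Hypothetical sketch of late-fusion correction detection:
# acoustic features (e.g. pitch/energy statistics) and text features
# (e.g. a transcript embedding) are concatenated and passed through a
# single logistic layer. The paper's actual architecture is not
# reproduced here; dimensions and weights are illustrative only.

rng = np.random.default_rng(0)

ACOUSTIC_DIM = 8   # assumed size of the acoustic feature vector
TEXT_DIM = 16      # assumed size of the transcript embedding

# Random weights stand in for trained fusion-layer parameters.
W = rng.normal(size=ACOUSTIC_DIM + TEXT_DIM)
b = 0.0

def detect_correction(acoustic_feats, text_feats, threshold=0.5):
    """Return (probability, is_correction) for one user utterance."""
    fused = np.concatenate([acoustic_feats, text_feats])  # late fusion
    prob = 1.0 / (1.0 + np.exp(-(W @ fused + b)))         # sigmoid score
    return prob, bool(prob >= threshold)

# Score one (synthetic) utterance.
prob, flagged = detect_correction(rng.normal(size=ACOUSTIC_DIM),
                                  rng.normal(size=TEXT_DIM))
print(f"correction probability: {prob:.3f}, flagged: {flagged}")
```

In practice each modality would feed through its own learned encoder before fusion, and the fusion layer would be trained on labeled correction/non-correction utterances; the sketch only shows how the two feature streams combine into one decision.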

Original language: English
Title of host publication: 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019
Pages: 1784-1786
Number of pages: 3
ISBN (Electronic): 9781510892002
State: Published - 2019
Event: 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019 - Montreal, Canada
Duration: 13 May 2019 - 17 May 2019

Publication series

Name: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Volume: 3
ISSN (Print): 1548-8403
ISSN (Electronic): 1558-2914

Conference

Conference: 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019
Country/Territory: Canada
City: Montreal
Period: 13/05/19 - 17/05/19

Keywords

  • Correction detection
  • Human-agent interaction
  • Implicit feedback
  • Multimodal deep learning architecture
  • Socially aware personal assistant

