Full metadata
Title
Improving upon the State-of-the-Art in Multimodal Emotional Recognition in Dialogue
Description
Emotion recognition in conversation has applications in numerous domains, such as affective computing and medicine. Recent methods for emotion recognition jointly utilize conversational data across several modalities, including audio, video, and text. However, state-of-the-art frameworks for this task do not focus on the feature extraction and feature fusion steps of the process. This thesis aims to improve upon the state-of-the-art method by incorporating two components that better accomplish these steps, producing improved representations for the text modality and better modeling the relationships among all modalities. It proposes two methods centered on these concepts that achieve improved accuracy over the state-of-the-art framework for multimodal emotion recognition in dialogue.
Date Created
2020-05
Contributors
- Rawal, Siddharth (Author)
- Baral, Chitta (Thesis director)
- Shah, Shrikant (Committee member)
- Computer Science and Engineering Program (Contributor)
- Barrett, The Honors College (Contributor)
Topical Subject
Resource Type
Extent
22 pages
Language
eng
Copyright Statement
In Copyright
Primary Member of
Series
Academic Year 2019-2020
Handle
https://hdl.handle.net/2286/R.I.56729
Level of coding
minimal
Cataloging Standards
System Created
- 2020-05-02 12:12:04
System Modified
- 2021-08-11 04:09:57
Additional Formats