Abstract
Emotion recognition is a key technique for natural interaction between humans and artificial intelligence systems. For effective emotion recognition in the continuous-time domain, this article presents a multimodal fusion network that integrates a video modality network and an electroencephalogram (EEG) modality network. To compute the attention weights of facial video features and the corresponding EEG features during fusion, a multimodal attention network based on low-rank bilinear pooling is proposed. Finally, continuous-domain valence values are computed from the outputs of the two modality networks and the attention weights. Experimental results show that the proposed fusion network improves performance by about 6.9% over the video modality network on the MAHNOB human-computer interface (MAHNOB-HCI) dataset. The fusion network also yields a performance improvement on our proprietary dataset.
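The fusion step described in the abstract can be sketched compactly. The snippet below is a minimal PyTorch illustration of attention-weighted fusion via low-rank bilinear pooling: each modality's features are linearly projected into a shared rank-d space, combined with a Hadamard product (the low-rank approximation of full bilinear pooling), and the fused vector produces softmax attention weights that mix the two modality networks' valence predictions. The class name, feature dimensions, and layer sizes here are illustrative assumptions, not the paper's published architecture.

```python
import torch
import torch.nn as nn

class LowRankBilinearAttentionFusion(nn.Module):
    """Sketch of attention-weighted video/EEG fusion via low-rank
    bilinear pooling. All dimensions are illustrative assumptions."""

    def __init__(self, video_dim=512, eeg_dim=128, rank=64):
        super().__init__()
        # Low-rank factors project each modality into a shared rank-d space.
        self.video_proj = nn.Linear(video_dim, rank)
        self.eeg_proj = nn.Linear(eeg_dim, rank)
        # Maps the fused representation to two attention logits, one per modality.
        self.attn = nn.Linear(rank, 2)

    def forward(self, video_feat, eeg_feat, video_valence, eeg_valence):
        # Hadamard product of the projected features approximates
        # full bilinear pooling at a fraction of the parameter count.
        fused = torch.tanh(self.video_proj(video_feat)) * \
                torch.tanh(self.eeg_proj(eeg_feat))
        # Softmax attention weights over the two modalities (sum to 1).
        weights = torch.softmax(self.attn(fused), dim=-1)
        # Continuous valence: attention-weighted sum of per-modality predictions.
        valence = weights[..., 0:1] * video_valence + \
                  weights[..., 1:2] * eeg_valence
        return valence, weights

# Usage with random stand-in features and per-modality valence outputs:
model = LowRankBilinearAttentionFusion()
v = torch.randn(4, 512)  # batch of video features
e = torch.randn(4, 128)  # batch of EEG features
valence, w = model(v, e, torch.randn(4, 1), torch.randn(4, 1))
```

The design choice worth noting is that the low-rank factorization replaces the quadratic-size bilinear weight tensor with two small projections and an element-wise product, which is what makes bilinear attention practical for high-dimensional video features.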
| Original language | English |
|---|---|
| Pages (from-to) | 203814-203826 |
| Number of pages | 13 |
| Journal | IEEE Access |
| Volume | 8 |
| DOIs | |
| State | Published - 2020 |
Bibliographical note
Publisher Copyright: © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
Keywords
- Attention
- EEG
- Emotion recognition
- Multimodal fusion
- Multimodality
- Video