Multimodal attention network for continuous-time emotion recognition using video and EEG signals

Research output: Contribution to journal › Article › peer-review


Abstract

Emotion recognition is an essential technique for natural interaction between humans and artificial intelligence systems. For effective emotion recognition in the continuous-time domain, this article presents a multimodal fusion network that integrates a video modality network and an electroencephalogram (EEG) modality network. To compute the attention weights of facial video features and the corresponding EEG features during fusion, we propose a multimodal attention network that employs bilinear pooling based on low-rank decomposition. Finally, continuous-domain valence values are computed from the outputs of the two modality networks and the attention weights. Experimental results show that the proposed fusion network improves performance by about 6.9% over the video modality network on the MAHNOB human-computer interface (MAHNOB-HCI) dataset. A performance improvement is also achieved on our proprietary dataset.
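To make the fusion step concrete, the sketch below illustrates attention-weighted fusion of video and EEG features via low-rank bilinear pooling, as outlined in the abstract. This is a minimal illustration, not the authors' implementation: the feature dimensions, rank, module names, and the choice of PyTorch are all assumptions.

```python
import torch
import torch.nn as nn

class LowRankBilinearAttentionFusion(nn.Module):
    """Illustrative sketch: fuse per-modality valence predictions with
    attention weights computed by low-rank bilinear pooling.
    All dimensions below are hypothetical."""

    def __init__(self, video_dim=256, eeg_dim=128, rank=64):
        super().__init__()
        # Low-rank factors project each modality into a shared space so the
        # bilinear interaction reduces to an elementwise product (rank-r
        # approximation of a full bilinear form).
        self.proj_video = nn.Linear(video_dim, rank)
        self.proj_eeg = nn.Linear(eeg_dim, rank)
        # Map the pooled interaction to attention logits over the two modalities.
        self.attn = nn.Linear(rank, 2)

    def forward(self, video_feat, eeg_feat, video_valence, eeg_valence):
        # Low-rank bilinear pooling: elementwise product of the projections.
        joint = torch.tanh(self.proj_video(video_feat)) * torch.tanh(self.proj_eeg(eeg_feat))
        # Attention weights for the two modalities (sum to 1 via softmax).
        weights = torch.softmax(self.attn(joint), dim=-1)
        # Continuous-domain valence: attention-weighted sum of the two
        # modality networks' valence predictions.
        valence = weights[..., 0] * video_valence + weights[..., 1] * eeg_valence
        return valence, weights

# Usage with random features for a batch of 4 time steps (shapes are assumptions):
fusion = LowRankBilinearAttentionFusion()
v_feat, e_feat = torch.randn(4, 256), torch.randn(4, 128)
v_val, e_val = torch.randn(4), torch.randn(4)
valence, weights = fusion(v_feat, e_feat, v_val, e_val)
```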

Original language: English
Pages (from-to): 203814-203826
Number of pages: 13
Journal: IEEE Access
Volume: 8
DOIs
State: Published - 2020

Bibliographical note

Publisher Copyright:
© 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.

Keywords

  • Attention
  • EEG
  • Emotion recognition
  • Multimodal fusion
  • Multimodality
  • Video

