Self-attention deep ConvLSTM with sparse-learned channel dependencies for wearable sensor-based human activity recognition

Shan Ullah, Mehdi Pirahandeh, Deok Hwan Kim

Research output: Contribution to journal › Article › peer-review

13 Scopus citations

Abstract

In this study, we propose a novel deep-learning architecture with sparse learning for human activity recognition. The proposed model combines 1D CNN and LSTM layers with a self-attention mechanism to emphasize informative time points in the time-series data processed by human activity recognition systems. Building on the recent success of squeeze-and-excite (SE) networks, the proposed model uses the SE module to capture channel-wise interdependencies, which in turn boosts performance. In addition, we apply sparse learning to retrain only the weak nodes of the fully connected layer preceding the classification layer, while freezing the stronger nodes. An entropy-inspired formula is used to locate these sparsely distributed weak nodes. We validated the model on several datasets, including Opportunity, UCI-HAR, and WISDM, and present an extensive analysis and survey of state-of-the-art studies alongside the proposed approach. For a fair comparison, we evaluated the architecture with multiple performance metrics; the proposed model outperformed state-of-the-art algorithms for human activity recognition.
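
For illustration, the following is a minimal PyTorch-style sketch of the pipeline described in the abstract: 1D CNN feature extraction, an SE block for channel-wise recalibration, an LSTM with self-attention over time steps, and an entropy-inspired score used to select weak nodes of the final fully connected layer for sparse retraining. This is not the authors' code; the layer sizes, the specific scoring formula in `weak_node_mask`, and the helper names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SEBlock(nn.Module):
    """Squeeze-and-excite: reweight channels using globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                          # x: (batch, channels, time)
        s = x.mean(dim=2)                          # squeeze over the time axis
        s = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))
        return x * s.unsqueeze(2)                  # excite: channel-wise rescaling


class SelfAttentionConvLSTM(nn.Module):
    """Assumed layout: Conv1d stack -> SE -> LSTM -> self-attention -> FC -> classifier."""
    def __init__(self, in_channels: int, num_classes: int, hidden: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.se = SEBlock(64)
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.fc = nn.Linear(hidden, hidden)        # layer targeted by sparse retraining
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x):                          # x: (batch, channels, time)
        h = self.se(self.conv(x)).transpose(1, 2)  # -> (batch, time, 64)
        h, _ = self.lstm(h)
        h, _ = self.attn(h, h, h)                  # self-attention over time points
        h = F.relu(self.fc(h.mean(dim=1)))         # temporal average pooling
        return self.classifier(h)


def weak_node_mask(fc: nn.Linear, retrain_ratio: float = 0.3) -> torch.Tensor:
    """Entropy-inspired node score (an assumed stand-in for the paper's formula):
    output nodes whose normalized weight distribution has the lowest entropy are
    flagged as weak and selected for retraining."""
    p = fc.weight.abs() / fc.weight.abs().sum(dim=1, keepdim=True).clamp_min(1e-12)
    entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1)
    k = int(retrain_ratio * entropy.numel())
    weak = torch.zeros_like(entropy, dtype=torch.bool)
    weak[entropy.topk(k, largest=False).indices] = True
    return weak                                    # True -> retrain, False -> freeze


def sparse_retrain_step(model, weak, batch, labels, optimizer):
    """Retrain only the weak rows of model.fc; gradients of strong rows are zeroed."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(batch), labels)
    loss.backward()
    with torch.no_grad():
        model.fc.weight.grad[~weak] = 0            # freeze strong nodes
        model.fc.bias.grad[~weak] = 0
    optimizer.step()
    return loss.item()
```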

Original language: English
Article number: 127157
Journal: Neurocomputing
Volume: 571
DOIs
State: Published - 28 Feb 2024

Bibliographical note

Publisher Copyright:
© 2023 Elsevier B.V.

Keywords

  • CNN
  • Human activity recognition
  • LSTM
  • Self-attention
  • Sparse learning
