Real-Time Memory Efficient Multitask Learning Model for Autonomous Driving

Shokhrukh Miraliev, Shakhboz Abdigapporov, Vijay Kakani, Hakil Kim

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

Developing a self-driving system is a challenging, safety-critical task that requires a high level of scene comprehension with real-time inference. This study proposes a real-time, memory-efficient multitask learning model for joint object detection, drivable area segmentation, and lane detection. To accomplish this objective, an encoder-decoder architecture was efficiently utilized to handle input frames through a shared representation. Comprehensive experiments were conducted on the challenging public Berkeley DeepDrive (BDD100K) dataset. For further performance comparison, a private dataset consisting of 30K frames was collected and annotated for the three aforementioned tasks. Experimental results demonstrated the superiority of the proposed method over existing baseline approaches in terms of computational efficiency, power consumption, and accuracy. On the BDD100K dataset, the model achieved the highest results of 77.5 mAP50 for object detection, 91.9 mIoU for drivable area segmentation, and 33.8 mIoU for lane detection. In addition, it reached a processing speed of 112.29 FPS, improving on both the accuracy and inference speed of existing multitask models.
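The page carries no code, but the shared-representation idea in the abstract (one encoder computed once, feeding three task-specific heads) can be sketched briefly. The following is a minimal illustrative sketch in PyTorch; all module names, channel counts, and head designs are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch: a shared-encoder multitask network with three heads,
# mirroring the joint detection / drivable-area / lane-detection setup
# described in the abstract. Layer sizes are assumptions, not the paper's.
import torch
import torch.nn as nn

class MultitaskNet(nn.Module):
    def __init__(self, det_outputs=85, area_classes=2, lane_classes=2):
        super().__init__()
        # Shared encoder: a single representation serves all three tasks,
        # which is what saves memory and compute versus three separate nets.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Task-specific decoders consume the shared features independently.
        self.det_head = nn.Conv2d(128, det_outputs, 1)  # per-cell box/class maps
        self.area_head = nn.Sequential(                 # drivable-area segmentation
            nn.Conv2d(128, area_classes, 1),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
        )
        self.lane_head = nn.Sequential(                 # lane-line segmentation
            nn.Conv2d(128, lane_classes, 1),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.encoder(x)  # computed once, shared by all heads
        return self.det_head(feats), self.area_head(feats), self.lane_head(feats)

if __name__ == "__main__":
    model = MultitaskNet()
    det, area, lane = model(torch.randn(1, 3, 384, 640))
    print(det.shape, area.shape, lane.shape)
```

In a sketch like this, one forward pass through the encoder amortizes most of the computation across the three tasks, which is the usual route to the kind of real-time throughput the abstract reports.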

Original language: English
Pages (from-to): 247-258
Number of pages: 12
Journal: IEEE Transactions on Intelligent Vehicles
Volume: 9
Issue number: 1
DOIs
State: Published - 1 Jan 2024

Bibliographical note

Publisher Copyright:
© 2016 IEEE.

Keywords

  • Multitask learning
  • autonomous driving
  • convolutional neural networks
  • drivable area segmentation
  • edge device
  • lane detection
  • object detection
