A Reinforcement Learning Based Dynamic Duty-Cycle Mode Selection Method in Wireless Sensor Networks

Seong Ho Youn, Sue Yeon Choi, So Myeong Kim, Wan Kyu Yun, Seung Hee Choi, Sang Jo Yoo

Research output: Contribution to journal › Article › peer-review

Abstract

A wireless sensor network monitors the environment in real time by continuously collecting data from sensor nodes and sending it to the sink. Because sensors have limited resources, it is important to use energy efficiently. In addition, tracking accuracy is an important requirement when object tracking is performed in wireless sensor networks. To satisfy both requirements at a high level, this paper proposes dynamically switching sensing modes by predicting the future movement of objects. The sensing data gathered by the sensor nodes is used to define the current state in terms of object speed and direction, and Q-learning is used to select, for each state, the optimal sensor region to temporarily put into wake-up mode. Through simulations, we confirm that the proposed method increases energy efficiency while satisfying a given level of accuracy.
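The Q-learning scheme described in the abstract can be sketched as a tabular learner whose states combine discretized object speed and heading, and whose actions are duty-cycle modes. This is a minimal illustrative sketch, not the paper's implementation: the state buckets, mode names, hyperparameters, and reward signal below are all assumptions for demonstration.

```python
import random
from collections import defaultdict

# Hypothetical discretization of the object's motion into states.
SPEEDS = ["slow", "medium", "fast"]
DIRECTIONS = ["N", "E", "S", "W"]
# Hypothetical duty-cycle modes the scheduler can assign to a sensor region.
MODES = ["sleep", "low_duty", "full_wakeup"]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # illustrative learning-rate, discount, exploration

Q = defaultdict(float)  # Q[(state, mode)] -> estimated value, initialized to 0

def choose_mode(state):
    """Epsilon-greedy selection of a duty-cycle mode for the given state."""
    if random.random() < EPSILON:
        return random.choice(MODES)
    return max(MODES, key=lambda m: Q[(state, m)])

def update(state, mode, reward, next_state):
    """One-step Q-learning update; the reward would trade off
    energy spent against tracking accuracy achieved."""
    best_next = max(Q[(next_state, m)] for m in MODES)
    Q[(state, mode)] += ALPHA * (reward + GAMMA * best_next - Q[(state, mode)])

# Example step: a slow, northbound object was tracked in full wake-up mode.
s, s_next = ("slow", "N"), ("slow", "N")
update(s, "full_wakeup", reward=1.0, next_state=s_next)
```

In this sketch the learner gradually raises the value of energy-cheap modes in states where accuracy stays acceptable, which mirrors the paper's goal of balancing the two objectives.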

Original language: English
Pages (from-to): 2198-2211
Number of pages: 14
Journal: Journal of Korean Institute of Communications and Information Sciences
Volume: 46
Issue number: 12
DOIs
State: Published - 1 Dec 2021

Bibliographical note

Publisher Copyright:
© 2021, Korean Institute of Communications and Information Sciences. All rights reserved.

Keywords

  • duty cycle
  • dynamic scheduling
  • Q-learning
  • Wireless Sensor Network
