Navigation of a mobile robot using mono-vision and mono-audition

W. Choi, C. Ryu, H. Kim

Research output: Contribution to journal › Conference article › Peer-review


Abstract

Incorporating various types of sensors increases the degree of autonomy and intelligence with which mobile robots (mobots) perceive their surroundings, but at the same time imposes a large computational burden on data processing. The purpose of the research presented in this paper is to develop a low-cost multi-sensor system and the accompanying algorithms for autonomous navigation of a mobot. This paper proposes digital image processing schemes for map-building and localization of a mobot using a monocular vision system and a single ultrasonic sensor in indoor environments. For localization, camera calibration is performed first so that depth information can be recovered from the image obtained by a single camera. For map-building, fast and effective image processing techniques based on morphology are applied to reduce computational complexity. For preliminary experiments, we integrated a mobot system whose main components are a mono-vision system, a single ultrasonic sensor, and a notebook PC mounted on a mobile base. The proposed algorithms were implemented, and the mobot was able to localize itself within the allowed position-error range and to locate dynamic obstacles moving reasonably fast inside a building. The overall results demonstrate the suitability of the proposed methods for developing autonomous service mobots in indoor environments.
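The abstract gives no implementation details; as a rough illustration of the two ingredients it names (depth recovery from a calibrated single camera and morphology-based cleanup for map-building), the sketch below back-projects a floor pixel through an assumed calibrated camera and denoises a binary obstacle mask with OpenCV morphology. All parameter values, frame conventions, and function names are assumptions chosen for illustration, not taken from the paper.

import numpy as np
import cv2

# Assumed camera parameters (illustrative values, not from the paper):
FX, FY = 520.0, 520.0       # focal lengths in pixels, from offline calibration
CX, CY = 320.0, 240.0       # principal point in pixels
CAM_HEIGHT = 0.45           # camera height above the floor [m]
CAM_PITCH = np.deg2rad(15)  # downward tilt of the optical axis [rad]

def pixel_to_floor(u, v):
    """Back-project a pixel assumed to lie on the floor plane and return
    (forward, lateral) coordinates in metres relative to the robot."""
    # Ray direction in the camera frame (x right, y down, z forward).
    x_c = (u - CX) / FX
    y_c = (v - CY) / FY
    z_c = 1.0
    # Intersect the ray with the ground plane in a world frame
    # (X forward, Y left, Z up), using the known pitch and height.
    denom = y_c * np.cos(CAM_PITCH) + z_c * np.sin(CAM_PITCH)
    if denom <= 1e-6:
        return None  # pixel is at or above the horizon; no floor intersection
    t = CAM_HEIGHT / denom
    forward = t * (z_c * np.cos(CAM_PITCH) - y_c * np.sin(CAM_PITCH))
    lateral = t * (-x_c)
    return forward, lateral

def clean_obstacle_mask(mask, kernel_size=5):
    """Suppress speckle noise in a binary obstacle mask with a
    morphological opening followed by a closing."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

if __name__ == "__main__":
    # Example: a pixel near the bottom of the image maps to a nearby floor point.
    print(pixel_to_floor(320, 420))

With the camera height and tilt known from calibration, a pixel that falls on the floor corresponds to a unique ground-plane point, which is how a single camera can yield range estimates; the opening/closing pair removes small speckle regions from the obstacle mask before it is written into the map, which is one way a morphology-based pipeline keeps the per-frame computation low.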

Original language: English
Pages (from-to): IV-686 - IV-691
Journal: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics
Volume: 4
State: Published - 1999
Event: 1999 IEEE International Conference on Systems, Man, and Cybernetics 'Human Communication and Cybernetics' - Tokyo, Japan
Duration: 12 Oct 1999 - 15 Oct 1999

