Information-Driven Active Audio-Visual Source Localization.

Schult N, Reineking T, Kluss T, Zetzsche C - PLoS ONE (2015)

Bottom Line: These actions by the robot successively reduce uncertainty about the source's position. Because of the robot's mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system's performance and discuss possible areas of application.


Affiliation: Cognitive Neuroinformatics, Bremen University, Bremen, Germany.

ABSTRACT
We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source's position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot's mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system's performance and discuss possible areas of application.
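The abstract describes two core ingredients: a particle filter that fuses directional (bearing) measurements over time, and an information-gain criterion for choosing the robot's next action. The following sketch illustrates these ideas for a bearing-only 2D localization problem; the particle count, noise parameters, candidate poses, and function names are illustrative assumptions and do not reproduce the authors' implementation.

```python
# Hedged sketch: bearing-only particle filter with expected-entropy action
# selection for 2D source localization. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N = 1000                                     # number of particles
particles = rng.uniform(-5.0, 5.0, (N, 2))   # candidate source positions (x, y) in metres
weights = np.full(N, 1.0 / N)
BEARING_STD = np.deg2rad(10.0)               # assumed azimuth measurement noise


def update(particles, weights, robot_pose, measured_bearing):
    """Reweight particles by how well they explain the measured bearing."""
    dx = particles[:, 0] - robot_pose[0]
    dy = particles[:, 1] - robot_pose[1]
    predicted = np.arctan2(dy, dx) - robot_pose[2]                # bearing relative to heading
    err = np.angle(np.exp(1j * (predicted - measured_bearing)))   # wrap to [-pi, pi]
    weights = weights * np.exp(-0.5 * (err / BEARING_STD) ** 2) + 1e-12
    return weights / weights.sum()


def entropy(particles, weights, bins=20):
    """Approximate positional entropy from a weighted 2D histogram."""
    h, _, _ = np.histogram2d(particles[:, 0], particles[:, 1],
                             bins=bins, weights=weights)
    p = h[h > 0] / h.sum()
    return -(p * np.log(p)).sum()


def expected_entropy_after(pose, particles, weights, n_samples=20):
    """Monte-Carlo estimate of posterior entropy after moving to `pose`."""
    exp_h = 0.0
    for _ in range(n_samples):
        src = particles[rng.choice(len(particles), p=weights)]    # sample a hypothetical source
        true_bearing = np.arctan2(src[1] - pose[1], src[0] - pose[0]) - pose[2]
        z = true_bearing + rng.normal(0.0, BEARING_STD)           # simulate a noisy measurement
        exp_h += entropy(particles, update(particles, weights, pose, z))
    return exp_h / n_samples


# Action selection: pick the candidate pose with the lowest expected posterior
# entropy, i.e. the highest expected information gain.
candidate_poses = [np.array([x, y, th]) for x in (-1.0, 0.0, 1.0)
                   for y in (-1.0, 0.0, 1.0) for th in (0.0, np.pi / 2)]
best_pose = min(candidate_poses,
                key=lambda a: expected_entropy_after(a, particles, weights))
```

In this toy version a single bearing leaves distance ambiguous, which is exactly why observations from several poses, chosen to shrink the posterior fastest, are needed to pin down both azimuth and distance.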



Fig 1 (pone.0137057.g001): Robot and Robohead. A) The robot we use for evaluating the proposed approach. B) The robot is equipped with a rotatable head, which features an integrated camera, biologically realistic pinnae, and in-ear stereo microphones.

Mentions: We implemented the system on a mobile robot (Pioneer P3-DX), on which we mounted a robot head that can rotate from -90 to +90 degrees in the azimuth plane and from -30 to +30 degrees in the median plane. The head features an integrated camera system as well as in-ear stereo microphones, which are mounted in biologically realistic, human-like pinnae (Kemar KB0065/66) attached to the sides of the head (see Fig 1). This setup is designed to mimic the human outer ear system (pinna, auditory canal, and eardrum) in order to provide a biologically realistic configuration and to allow basic modeling of human auditory processes.
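The stated rotation ranges translate directly into command limits for the head. The minimal helper below, with hypothetical names not taken from the authors' software, simply clamps commanded pan/tilt angles to the -90/+90 degree azimuth and -30/+30 degree median-plane ranges described above.

```python
# Hedged sketch: clamp head commands to the rotation ranges stated in the text.
# Function and constant names are hypothetical, not from the authors' software.
PAN_RANGE_DEG = (-90.0, 90.0)    # azimuth-plane rotation limits
TILT_RANGE_DEG = (-30.0, 30.0)   # median-plane rotation limits


def clamp_head_command(pan_deg: float, tilt_deg: float) -> tuple[float, float]:
    """Return a (pan, tilt) command guaranteed to lie within the head's limits."""
    pan = min(max(pan_deg, PAN_RANGE_DEG[0]), PAN_RANGE_DEG[1])
    tilt = min(max(tilt_deg, TILT_RANGE_DEG[0]), TILT_RANGE_DEG[1])
    return pan, tilt


# Example: a requested 120-degree pan and -45-degree tilt are limited to the maxima.
print(clamp_head_command(120.0, -45.0))   # -> (90.0, -30.0)
```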

