Performance of global-appearance descriptors in map building and localization using omnidirectional vision.

Payá L, Amorós F, Fernández L, Reinoso O - Sensors (Basel) (2014)

Bottom Line: However, a good configuration of the parameters is important to reach a compromise between computational cost and accuracy. To this end, we make use of several image sets captured in indoor environments under realistic working conditions. The datasets have been collected using an omnidirectional vision sensor mounted on the robot.


Affiliation: Departamento de Ingeniería de Sistemas y Automática, Miguel Hernández University, Avda. de la Universidad s/n, Elche (Alicante), Spain. lpaya@umh.es.

ABSTRACT
Map building and localization are two crucial abilities that autonomous robots must develop. Vision sensors have become a widespread option to solve these problems. When using this kind of sensor, the robot must extract the necessary information from the scenes to build a representation of the environment where it has to move and to estimate its position and orientation robustly. The techniques based on the global appearance of the scenes constitute one possible approach to extract this information. They consist in representing each scene with a single descriptor that gathers global information from the scene. These techniques present some advantages compared to other classical descriptors based on the extraction of local features. However, a good configuration of the parameters is important to reach a compromise between computational cost and accuracy. In this paper, we make an exhaustive comparison among several global-appearance descriptors to solve the mapping and localization problem. To this end, we make use of several image sets captured in indoor environments under realistic working conditions. The datasets have been collected using an omnidirectional vision sensor mounted on the robot.
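As a point of reference for the descriptors compared here, the Fourier Signature is commonly computed as the row-wise discrete Fourier transform of a panoramic image, keeping only the first k1 coefficients of each row. The following Python sketch illustrates that idea under this assumption; it is a minimal illustration, not the authors' implementation, and the function name and image size are hypothetical.

import numpy as np

def fourier_signature(panorama, k1):
    # Row-wise DFT of a grayscale panoramic image; keep the magnitude
    # of the first k1 coefficients per row (the paper's k1 parameter).
    # Magnitudes are invariant to the robot's orientation: rotating the
    # robot shifts the panorama columns, which only changes the phase
    # of each row's DFT, not its magnitude.
    spectrum = np.fft.fft(panorama.astype(float), axis=1)
    return np.abs(spectrum[:, :k1])

# Hypothetical usage: a 64 x 256 panorama compressed to a 64 x 8 descriptor.
panorama = np.random.rand(64, 256)
descriptor = fourier_signature(panorama, k1=8)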



f16-sensors-14-03033: Average error during the localization process as a function of the descriptor parameters for (a) the Fourier Signature; (b) HOG; (c) gist; and average step time during localization as a function of the descriptor parameters for (d) the Fourier Signature; (e) HOG; (f) gist.

Mentions: Figure 16 shows the average error during the localization process and the average step time. A step is counted every time a route image arrives and the position of the robot is estimated using the Monte Carlo algorithm. If we compute the localization error at each step (comparing the result of the algorithm with the actual position of the robot), we obtain the curves in Figure 16a–c. These curves show that the behavior of the Fourier Signature is the most stable, independently of the value of k1. HOG presents similar results when the number of cells k2 is between 16 and 64, and gist also presents better results when the number of Gabor masks k4 is high, but in all cases its error is higher compared to the Fourier Signature.
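To make the error metric concrete, below is a minimal sketch of how the per-step localization error could be aggregated, assuming 2D (x, y) ground-truth positions and a Euclidean error measure; the function name and array layout are illustrative, not taken from the paper.

import numpy as np

def average_localization_error(estimated, ground_truth):
    # estimated, ground_truth: arrays of shape (n_steps, 2) holding the
    # (x, y) position at each step of the route. The error at each step
    # is the Euclidean distance between the Monte Carlo estimate and the
    # actual position; curves like those in Figure 16a-c report this
    # error per step, and the mean summarizes a whole route.
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(errors.mean())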

