Performance of global-appearance descriptors in map building and localization using omnidirectional vision.

Payá L, Amorós F, Fernández L, Reinoso O - Sensors (Basel) (2014)

Bottom Line: However, a good configuration of the parameters is important to reach a compromise between computational cost and accuracy. With this aim, we make use of several image sets captured in indoor environments under realistic working conditions. The datasets have been collected using an omnidirectional vision sensor mounted on the robot.


Affiliation: Departamento de Ingeniería de Sistemas y Automática, Miguel Hernández University, Avda. de la Universidad s/n, Elche (Alicante), Spain. lpaya@umh.es.

ABSTRACT
Map building and localization are two crucial abilities that autonomous robots must develop. Vision sensors have become a widespread option to solve these problems. When using this kind of sensor, the robot must extract the necessary information from the scenes to build a representation of the environment where it has to move and to estimate its position and orientation with robustness. The techniques based on the global appearance of the scenes constitute one possible approach to extract this information. They consist of representing each scene with a single descriptor that gathers global information from the scene. These techniques present some advantages compared to classical descriptors based on the extraction of local features. However, a good configuration of the parameters is important to reach a compromise between computational cost and accuracy. In this paper, we make an exhaustive comparison among several global-appearance descriptors to solve the mapping and localization problem. With this aim, we make use of several image sets captured in indoor environments under realistic working conditions. The datasets have been collected using an omnidirectional vision sensor mounted on the robot.


f5-sensors-14-03033: (a) Sample image filtered with k₄ = 4 Gabor filters with {0°, 45°, 90°, 135°} orientations at two scales and (b) extraction of the values to build the two descriptors from each filtered image with horizontal and vertical cells.
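As context for the figure, the sketch below builds the kind of Gabor filter bank the caption describes: four orientations at two scales, with the real part of each filter formed as a sinusoid windowed by a Gaussian envelope. This is a minimal sketch, not the paper's implementation; the kernel size and the (wavelength, sigma) pairs are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve  # illustrative choice of convolution backend

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor filter: a cosine carrier windowed by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotate coordinates by theta
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# Bank of 4 orientations at 2 scales, as in the figure.
thetas = np.deg2rad([0, 45, 90, 135])
scales = [(8.0, 4.0), (16.0, 8.0)]                    # (wavelength, sigma); assumed values
bank = [gabor_kernel(31, lam, th, sig)
        for (lam, sig) in scales for th in thetas]
# filtered = [convolve(image, k) for k in bank]       # one response matrix per filter
```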

Mentions: Blockification. To reduce the amount of information, we group the pixels of every resulting matrix into blocks by computing the average intensity value of the pixels in each block. Usually, a set of square blocks is defined on the image to carry out the blockification process [25]. However, we have decided to make the block division in a similar fashion as in HOG: first we compute a descriptor with horizontal blocks (to be used for localization purposes) and then a second descriptor with overlapping vertical blocks (to compute the orientation), as shown in Figure 5 and sketched below. This blockification process is a contribution of our work and provides us with a rotationally invariant gist descriptor.
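A minimal sketch of this blockification, assuming each filtered response is a 2D NumPy array holding a panoramic image. A robot rotation shifts a panorama along its columns, so row-spanning horizontal bands are unaffected (rotational invariance), while overlapping vertical strips shift circularly with the rotation and can be compared against a reference to recover orientation. The function names, block counts, and overlap scheme are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def horizontal_descriptor(img, n_blocks):
    """Mean intensity of n_blocks horizontal bands spanning the full width.
    Column shifts (robot rotations) leave row-wise averages unchanged,
    so this descriptor is rotationally invariant: use it for localization."""
    bands = np.array_split(img, n_blocks, axis=0)      # split rows into bands
    return np.array([band.mean() for band in bands])

def vertical_descriptor(img, n_blocks, overlap=0.5):
    """Mean intensity of n_blocks overlapping vertical strips.
    A column shift of the panorama circularly shifts this vector,
    which is what allows the orientation to be estimated."""
    h, w = img.shape
    step = w / n_blocks                                # spacing between strip origins
    width = int(round(step * (1.0 + overlap)))         # strips overlap by `overlap`
    desc = np.empty(n_blocks)
    for i in range(n_blocks):
        start = int(round(i * step))
        cols = np.arange(start, start + width) % w     # wrap around the panorama
        desc[i] = img[:, cols].mean()
    return desc

# Hypothetical usage on the bank of filtered images (list of 2D arrays):
# gist_loc = np.concatenate([horizontal_descriptor(f, 32) for f in filtered])
# gist_rot = np.concatenate([vertical_descriptor(f, 64) for f in filtered])
```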

