Complete scene recovery and terrain classification in textured terrain meshes.

Song W, Cho K, Um K, Won CS, Sim S - Sensors (Basel) (2012)

Bottom Line: Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds.

View Article: PubMed Central - PubMed

Affiliation: Department of Multimedia Engineering, Dongguk University-Seoul, 26 Pil-dong 3-ga, Jung-gu, Seoul 100-715, Korea. songwei@dongguk.edu

ABSTRACT
Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. To overcome this problem, this paper describes an edge estimation method for complete scene recovery and complete terrain reconstruction. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds. Further, a masking method is proposed to classify buildings and trees in a terrain mesh.
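The Gibbs-Markov random field segmentation named in the abstract can be illustrated with a minimal sketch: a binary ground/non-ground labeling that minimizes a Gibbs energy (a per-pixel data term plus a Potts smoothness prior over 4-neighbors) via iterated conditional modes (ICM). The function name, cost layout, and parameters below are illustrative assumptions, not the paper's actual model.

```python
def icm_segment(data_cost, beta=1.0, iters=5):
    """Binary ground/non-ground labeling by ICM on a Gibbs-MRF (sketch).

    data_cost[r][c][k] is an assumed per-pixel cost of assigning label k
    (0 = ground, 1 = non-ground) to pixel (r, c); beta weights a Potts
    smoothness prior that penalizes disagreeing 4-neighbors.
    """
    rows, cols = len(data_cost), len(data_cost[0])
    # Initialize each pixel with its cheapest data label.
    labels = [[min((0, 1), key=lambda k: data_cost[r][c][k])
               for c in range(cols)] for r in range(rows)]
    for _ in range(iters):
        changed = False
        for r in range(rows):
            for c in range(cols):
                best, best_e = labels[r][c], None
                for k in (0, 1):
                    # Gibbs energy at this pixel: data term + smoothness term.
                    e = data_cost[r][c][k]
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols:
                            e += beta * (k != labels[nr][nc])
                    if best_e is None or e < best_e:
                        best_e, best = e, k
                if best != labels[r][c]:
                    labels[r][c] = best
                    changed = True
        if not changed:
            break
    return labels
```

With a sufficiently large beta, an isolated pixel whose data term weakly favors "non-ground" is flipped to agree with its ground-labeled neighbors, which is the smoothing behavior an MRF prior provides.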

No MeSH data available.


f15-sensors-12-11221: Tree classification results: (a) frame 50; (b) frame 100; (c) frame 200; and (d) frame 800.

Mentions: We render the textured terrain mesh and represent the texture of the ground, trees, and buildings at an average of 11.43 frames per second (FPS) using the Gibbs-MRF model together with the flood-fill algorithm. This is faster than using the Gibbs-MRF model alone (8.37 FPS). After recovering complete scenes in the terrain mesh, we classify objects into tree and building classes. The tree classification results are shown in blue in the 2D images in Figure 15. In the 50th and 100th frames, the objects are far from the robot, so noise appears in the sensed objects, especially at the corners. When the robot moves closer to the building in the 200th frame, the corner shape is detected accurately, and the corner pixels are grouped into the building class. When the robot is near the trees in the 800th frame, the range sensor is more accurate than when the robot is far from them. Finally, the noise in the spaces between the trees is removed in the reconstructed terrain mesh.
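The flood-fill step mentioned above can be sketched generically: starting from a seed pixel, it relabels the 4-connected region of same-class pixels, which is how isolated noise regions can be grouped or absorbed into a surrounding class. This is a standard breadth-first flood fill under assumed names, not the authors' implementation.

```python
from collections import deque

def flood_fill(grid, start, target, label):
    """Relabel the 4-connected region of `target` cells reachable from
    `start` with `label`, returning the number of cells filled.

    `grid` is a 2D list of class labels (illustrative layout).
    """
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    if grid[r0][c0] != target:
        return 0
    filled = 0
    queue = deque([start])
    grid[r0][c0] = label  # mark on enqueue to avoid revisiting
    while queue:
        r, c = queue.popleft()
        filled += 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == target:
                grid[nr][nc] = label
                queue.append((nr, nc))
    return filled
```

Because the fill visits each pixel at most once, it adds only linear cost per frame, consistent with the reported speedup over running the Gibbs-MRF model alone.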


Complete scene recovery and terrain classification in textured terrain meshes.

Song W, Cho K, Um K, Won CS, Sim S - Sensors (Basel) (2012)

Tree classification results: (a) frame 50; (b) frame 100; (c) frame 200; and (d) frame 800.
© Copyright Policy
Related In: Results  -  Collection

License
Show All Figures
getmorefigures.php?uid=PMC3472881&req=5

f15-sensors-12-11221: Tree classification results: (a) frame 50; (b) frame 100; (c) frame 200; and (d) frame 800.
Mentions: We render the textured terrain mesh and represent the texture of the ground, trees, and buildings at an average of 11.43 frames per second (FPS) using the Gibbs-MRF model along with the flood-fill algorithm. This is faster than the case where only the Gibbs-MRF model is used (8.37 FPS). After recovering complete scenes in the terrain mesh, we classify objects into tree and building classes. The tree classification results are indicated in blue color in the 2D images in Figure 15. In the 50th and 100th frames, the objects are located far from the robot, so that noise exists in the sensed objects, especially at the corners. When the robot moves closer to the building in the 200th frame, the corner shape is detected accurately. The corner pixels are grouped in the building class. When the robot is located near the trees in the 800th frame, the accuracy of the range sensor is higher than that when the robot is far from the trees. Finally, the noise in the spaces between the trees is removed in the reconstructed terrain mesh.

Bottom Line: Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor.However, parts of objects that are outside the measurement range of the range sensor will not be detected.Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds.

View Article: PubMed Central - PubMed

Affiliation: Department of Multimedia Engineering, Dongguk University-Seoul, 26 Pildosng 3 Ga, Jung-gu, Seoul 100-715, Korea. songwei@dongguk.edu

ABSTRACT
Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. To overcome this problem, this paper describes an edge estimation method for complete scene recovery and complete terrain reconstruction. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds. Further, a masking method is proposed to classify buildings and trees in a terrain mesh.

No MeSH data available.