Complete scene recovery and terrain classification in textured terrain meshes.

Song W, Cho K, Um K, Won CS, Sim S - Sensors (Basel) (2012)

Bottom Line: Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds.

View Article: PubMed Central - PubMed

Affiliation: Department of Multimedia Engineering, Dongguk University-Seoul, 26 Pil-dong 3 Ga, Jung-gu, Seoul 100-715, Korea. songwei@dongguk.edu

ABSTRACT
Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. To overcome this problem, this paper describes an edge estimation method for complete scene recovery and complete terrain reconstruction. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds. Further, a masking method is proposed to classify buildings and trees in a terrain mesh.

No MeSH data available.


f2-sensors-12-11221: Grid-based ground modeling.

Mentions: We integrate the sensed dataset into a grid-based textured terrain mesh. First, we project the 3D points onto the 2D image in front of the robot to obtain a 2D image coordinate, called the UV vector, for each 3D point. Then, we transform the local 3D points into global coordinates and register them on the terrain mesh. The terrain mesh is generated from several grids, each with 151 × 151 textured vertices. In this application, the cell size is 0.125 × 0.125 m². The height value of each cell is updated with the registered 3D points. If a new 3D point is to be inserted into the reconstructed terrain mesh but lies outside the existing grids, we create a new grid to register this point, as shown in Figure 2.
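
The grid-allocation and height-update steps described above can be illustrated with a minimal Python sketch. This is not the authors' code: the function and variable names are hypothetical, and the max-height update policy is an assumption (the paper only states that each cell's height is "updated"). Only the grid dimensions (151 × 151 vertices) and the 0.125 m cell size come from the text.

```python
# Minimal sketch (assumptions noted above) of registering global 3D points
# into a grid-based terrain mesh: each grid has 151 x 151 vertices with
# 0.125 m cells, and a new grid is allocated when a point falls outside
# the existing ones.
import math
from collections import defaultdict

GRID_VERTS = 151                           # vertices per grid side (from the paper)
CELL_SIZE = 0.125                          # cell size in metres (from the paper)
GRID_SPAN = (GRID_VERTS - 1) * CELL_SIZE   # metres covered by one grid

# grids maps a grid index (gi, gj) to a 2D height map; a new grid is
# created on demand by the defaultdict when a point lands outside all
# existing grids.
grids = defaultdict(lambda: [[0.0] * GRID_VERTS for _ in range(GRID_VERTS)])

def register_point(x, y, z):
    """Register one global 3D point (x, y, z) into the terrain mesh."""
    gi = math.floor(x / GRID_SPAN)         # grid index along x
    gj = math.floor(y / GRID_SPAN)         # grid index along y
    grid = grids[(gi, gj)]                 # allocates a new grid if needed

    # local cell indices inside the selected grid
    ci = int((x - gi * GRID_SPAN) / CELL_SIZE)
    cj = int((y - gj * GRID_SPAN) / CELL_SIZE)

    # illustrative update policy: keep the maximum observed height per cell
    grid[ci][cj] = max(grid[ci][cj], z)

# example usage with a few synthetic points
for p in [(0.3, 0.2, 0.05), (19.0, 0.1, 1.2), (-0.4, 2.5, 0.0)]:
    register_point(*p)
print(len(grids), "grid(s) allocated")
```

In this sketch the second and third points fall outside the first grid, so two additional grids are allocated, mirroring the behaviour shown in Figure 2.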
