Complete scene recovery and terrain classification in textured terrain meshes.

Song W, Cho K, Um K, Won CS, Sim S - Sensors (Basel) (2012)

Bottom Line: Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds.

View Article: PubMed Central - PubMed

Affiliation: Department of Multimedia Engineering, Dongguk University-Seoul, 26 Pil-dong 3 Ga, Jung-gu, Seoul 100-715, Korea. songwei@dongguk.edu

ABSTRACT
Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. To overcome this problem, this paper describes an edge estimation method for complete scene recovery and complete terrain reconstruction. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds. Further, a masking method is proposed to classify buildings and trees in a terrain mesh.

No MeSH data available.


f4-sensors-12-11221: Projection results as green pixels in an image.

Mentions: Then we find the projected pixels in the 2D image from the points in G1, using the projection matrix as follows:

(7)  t = K R [I | −Cam] T

where the homogeneous coordinates of the image pixel t are projected from the homogeneous coordinates of the 3D point T. Cam is the vector of the camera's position, R is the mobile robot's rotation matrix, and I is the identity matrix. The camera calibration matrix K is defined as follows:

(8)  K = | l  0  px |
         | 0  l  py |
         | 0  0  1  |

where l is the focal length of the camera, and the 2D coordinate (px, py) is the center of the captured image. As shown in Figure 4, the 2D pixel dataset is mapped from the dataset G1, and the configuration of each projected site is determined as ground.
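As a minimal sketch, the projection in Equations (7) and (8) can be expressed in NumPy as follows. The function names and the sample focal length and image-center values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def projection_matrix(l, px, py, R, cam):
    """Build P = K R [I | -Cam] as in Equation (7)."""
    # Camera calibration matrix K, Equation (8)
    K = np.array([[l, 0.0, px],
                  [0.0, l, py],
                  [0.0, 0.0, 1.0]])
    # [I | -Cam]: 3x4 matrix translating world points by the camera position
    I_cam = np.hstack([np.eye(3), -cam.reshape(3, 1)])
    return K @ R @ I_cam

def project_point(P, T):
    """Project homogeneous 3D point T (4-vector) to pixel coordinates (u, v)."""
    t = P @ T
    return t[:2] / t[2]  # divide out the homogeneous scale

# Example: identity rotation, camera at the origin,
# focal length 500 px, image center (320, 240)
P = projection_matrix(500.0, 320.0, 240.0, np.eye(3), np.zeros(3))
uv = project_point(P, np.array([0.0, 0.0, 1.0, 1.0]))
print(uv)  # a point on the optical axis lands on the image center: [320. 240.]
```

Each ground point in G1 would be passed through `project_point`; the resulting pixels are the green projections shown in Figure 4.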


Complete scene recovery and terrain classification in textured terrain meshes.

Song W, Cho K, Um K, Won CS, Sim S - Sensors (Basel) (2012)

Projection results as green pixels in an image.
© Copyright Policy
Related In: Results  -  Collection

License
Show All Figures
getmorefigures.php?uid=PMC3472881&req=5

f4-sensors-12-11221: Projection results as green pixels in an image.
Mentions: Then we find the projected pixels in the 2D image from the points in G1, using the projection matrix as follows:(7)t=KR[I/−Cam]Twhere the homogeneous coordinates of image pixel t are projected from the homogeneous coordinates of the 3D point T. Cam is defined as the vector of the camera's position, the matrix R is defined as the mobile rotation matrix, and I is an identity matrix. The camera calibration matrix K is defined as follows:(8)K=[l0px0lpy001]where l is the focal length of the camera, and the 2D coordinate (px, py) is the center position of the captured image. As shown in Figure 4, the 2D pixel dataset is mapped from the dataset G1. We determine the configuration of site as ground.

Bottom Line: Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor.However, parts of objects that are outside the measurement range of the range sensor will not be detected.Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds.

View Article: PubMed Central - PubMed

Affiliation: Department of Multimedia Engineering, Dongguk University-Seoul, 26 Pildosng 3 Ga, Jung-gu, Seoul 100-715, Korea. songwei@dongguk.edu

ABSTRACT
Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. To overcome this problem, this paper describes an edge estimation method for complete scene recovery and complete terrain reconstruction. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds. Further, a masking method is proposed to classify buildings and trees in a terrain mesh.

No MeSH data available.