Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity.

Oh T, Lee D, Kim H, Myung H - Sensors (Basel) (2015)

Bottom Line: To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that the robot can be localized in real time in environments with laser scan ambiguity, and that the performance of the proposed method is superior to that of the conventional approach.

View Article: PubMed Central - PubMed

Affiliation: Urban Robotics Laboratory (URL), Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro (373-1 Guseong-dong), Yuseong-gu, Daejeon 305-701, Korea. buljaga@kaist.ac.kr.

ABSTRACT
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms that rely on a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera within the framework of graph structure-based SLAM. The 3D coordinates of image feature points are acquired through the hybrid method, under the assumption that the wall is normal to the ground and vertically flat. This assumption can be relaxed, however, because the subsequent feature matching process rejects outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that the robot can be localized in real time in environments with laser scan ambiguity, and that the performance of the proposed method is superior to that of the conventional approach.

No MeSH data available.



f4-sensors-15-15830: Graph structure generation from respective sensor data. The constraints and nodes are generated from the prediction from each sensor. The final pose of the robot is estimated using graph optimization.

Mentions: After the sensors are installed on the robot, the robot pose is estimated using each sensor separately, and a hybrid algorithm then fuses the results, as shown in Figure 4. The covariance and measurement values from each sensor are used to generate the constraints of the graph structure. The final corrected robot pose is obtained by graph optimization over the graph structure organized from the generated constraint information. Odometry information is generated by the encoders and is used to generate the pose constraint. A 2D grid map is built from the depth data of the 2D laser scanner, and the 2D robot pose constraint is then generated from the 2D grid map and ICP matching using GMapping [4]. After feature points are extracted from a camera image by the SURF (speeded-up robust features) algorithm [29], the 3D coordinates of each feature point are obtained by augmenting them with depth information from the 2D laser scanner. The 3D robot pose constraint is then generated from the 3D coordinates of the feature points. The hybrid method for obtaining the robot pose using a monocular camera and a laser scanner was described in detail in the previous subsection.
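The pipeline above, in which odometry, laser, and camera constraints are fused through graph optimization, can be sketched in miniature. The following is an illustrative toy example, not the paper's implementation: poses are simplified to 2D translations (orientation is dropped) so the pose graph reduces to a linear weighted least-squares problem, and scalar weights stand in for the full covariance matrices from each sensor. The function name and constraint format are assumptions made for this sketch.

```python
import numpy as np

def optimize_pose_graph(num_poses, constraints, dim=2):
    """Estimate poses from relative constraints by weighted least squares.

    Each constraint (i, j, z, w) says pose_j - pose_i should equal the
    measured displacement z, with weight w (an inverse-covariance stand-in).
    Pose 0 is anchored near the origin with a strong prior.
    """
    n = num_poses * dim
    H = np.zeros((n, n))                 # information matrix
    b = np.zeros(n)                      # information vector
    H[:dim, :dim] += 1e6 * np.eye(dim)   # anchor pose 0 at the origin
    for i, j, z, w in constraints:
        z = np.asarray(z, dtype=float)
        si, sj = i * dim, j * dim
        I = np.eye(dim)
        # residual (x_j - x_i) - z; Jacobian is -I for pose i, +I for pose j
        H[si:si+dim, si:si+dim] += w * I
        H[sj:sj+dim, sj:sj+dim] += w * I
        H[si:si+dim, sj:sj+dim] -= w * I
        H[sj:sj+dim, si:si+dim] -= w * I
        b[si:si+dim] -= w * z
        b[sj:sj+dim] += w * z
    return np.linalg.solve(H, b).reshape(num_poses, dim)

# Three poses along a corridor: two odometry constraints and one
# camera-derived constraint that disagrees slightly; the optimizer
# spreads the discrepancy according to the weights.
poses = optimize_pose_graph(3, [
    (0, 1, (1.0, 0.0), 1.0),   # odometry: moved 1 m forward
    (1, 2, (1.0, 0.0), 1.0),   # odometry: moved 1 m forward
    (0, 2, (2.2, 0.0), 1.0),   # visual constraint: 2.2 m total
])
```

In the full method, each pose is an SE(2) element, the residuals are nonlinear in the heading angle, and the weights are the per-sensor covariance matrices, but the structure of the problem (accumulate constraint contributions into an information matrix, then solve) is the same.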

